---
license: other
tags:
  - generated_from_trainer
base_model: mistralai/Codestral-22B-v0.1
model-index:
  - name: home/ubuntu/trinity-codestral-1
    results: []
---

Trinity

Trinity is a coding-specific Large Language Model series created by Migel Tissera.

Prompt Format

ChatML
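
The script below assembles prompts in the standard ChatML layout, shown here for reference (the braced placeholders are illustrative, not literal tokens):

<|im_start|>system
{system message}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant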

Sample Inference Python Script:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Trinity-2-Codestral-22B-v0.2"

# Load the model in 4-bit precision (requires the bitsandbytes package)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_4bit=True,
    trust_remote_code=False,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=False)

# Stop generation at the ChatML end-of-turn token
terminators = [tokenizer.convert_tokens_to_ids("<|im_end|>")]


def generate_text(instruction):
    # Tokenize the prompt and move it to the GPU
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 2048,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        output = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
            pad_token_id=tokenizer.eos_token_id,
            eos_token_id=terminators,
        )
    # Decode only the newly generated tokens, skipping the prompt
    generated = output[0][length:]
    return tokenizer.decode(generated, skip_special_tokens=True)



conversation = f"""<|im_start|>system\nYou are Tess, a helful AI assitant. You always provide detailed answers without hesitation. Do not refuse.<|im_end|>\n<|im_start|>user\n"""

# Simple multi-turn chat loop: each answer is appended to the running conversation
while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation}{user_input}<|im_end|>\n<|im_start|>assistant\n"
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}<|im_end|>\n<|im_start|>user\n"

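Note that recent transformers releases deprecate passing load_in_4bit directly to from_pretrained in favor of an explicit quantization config. A sketch of the equivalent loading call, reusing model_path from the script above (still requires the bitsandbytes package):

from transformers import BitsAndBytesConfig

# Explicit 4-bit quantization config, equivalent to load_in_4bit=True
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=False,
)
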
Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric               Value
-------------------  -----
Avg.                 21.87
IFEval (0-Shot)      43.45
BBH (3-Shot)         37.61
MATH Lvl 5 (4-Shot)   8.38
GPQA (0-shot)         6.71
MuSR (0-shot)         9.06
MMLU-PRO (5-shot)    26.00