
HUMAN EVAL SCORE!!!!

#1 by rombodawg - opened

Bro you can't post a model that's a fine-tuned version of one of the best coding models in existence and not post HumanEval scores.

PLEASE. I'm begging you, post some HumanEval and HumanEval+ scores, for God's sake.

lol yeah, this is what I'm trying to do right now, but it's taking too long to generate the samples file 😅 I'm probably going to run out of RunPod credits before it's finished.

If you or someone else want to try, please feel free!

Here is the code that I've been using to generate the samples file:

!pip install human-eval
!pip install evalplus --upgrade
!pip install transformers
!pip install accelerate
!pip install sentencepiece
!pip install protobuf

from transformers import AutoTokenizer, AutoModelForCausalLM
from evalplus.data import get_human_eval_plus, write_jsonl
import torch

# Initialize the model and tokenizer (CodeNinja reuses the OpenChat-3.5 tokenizer)
model_path = "beowolx/CodeNinja-1.0-OpenChat-7B"
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("openchat/openchat-3.5-1210", use_fast=True)
tokenizer.pad_token = tokenizer.eos_token  # no pad token is set by default

def generate_one_completion(prompt: str):
    # A single user turn; add_generation_prompt=True appends the assistant
    # prefix, so no empty assistant placeholder message is needed
    messages = [{"role": "user", "content": prompt}]

    # Apply the chat template and get the prompt token IDs as a tensor
    input_ids = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,
        truncation=True,
        max_length=4096,
        return_tensors="pt",
    ).to(model.device)

    # Generate completion
    generate_ids = model.generate(
        input_ids,
        max_new_tokens=384,
        do_sample=True,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )

    # Decode only the newly generated tokens, not the prompt itself
    completion = tokenizer.decode(generate_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
    completion = completion.split("\n\n\n")[0].strip()

    return completion
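
# Optional: sanity-check one toy prompt first (illustrative, not from the
# benchmark) before spending credits on all 164 tasks:
# print(generate_one_completion('def add(a, b):\n    """Add two numbers."""\n'))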


# Generate one solution per HumanEval+ task and write them to samples.jsonl
samples = [
    dict(task_id=task_id, solution=generate_one_completion(problem["prompt"]))
    for task_id, problem in get_human_eval_plus().items()
]
write_jsonl("samples.jsonl", samples)
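
Once samples.jsonl is written, the scores themselves come from the EvalPlus evaluator. A minimal sketch, assuming a recent evalplus release (the humaneval run reports both HumanEval and HumanEval+ pass@1):

# Score the generated samples against HumanEval and HumanEval+
!evalplus.evaluate --dataset humaneval --samples samples.jsonl

If the model wraps its solutions in markdown code fences, evalplus also ships a sanitizer (evalplus.sanitize) that's worth running on the JSONL before scoring.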
beowolx changed discussion status to closed
