Chocolatine-32B-Instruct-DPO-v1.2

DPO fine-tune of rombodawg/Rombos-LLM-V2.5-Qwen-32b (itself based on Qwen/Qwen2.5-32B), trained with the jpacifico/french-orca-dpo-pairs-revised RLHF dataset.
Training on French data also improves the model in English, where it surpasses the performance of its base model.

Long-context support: handles up to 128K tokens of context and can generate up to 8K tokens.
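
For context, here is a minimal sketch of the kind of DPO training run described above, using Hugging Face's trl library. This is not the author's actual training script: the hyperparameters (e.g. beta) and output_dir are illustrative, exact DPOTrainer argument names vary across trl versions, and it assumes the dataset follows the standard prompt/chosen/rejected preference format. In practice a 32B model also needs multi-GPU and/or parameter-efficient training.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Sketch only: not the actual training setup used for Chocolatine
base_id = "rombodawg/Rombos-LLM-V2.5-Qwen-32b"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
train_dataset = load_dataset("jpacifico/french-orca-dpo-pairs-revised", split="train")

args = DPOConfig(output_dir="chocolatine-dpo", beta=0.1)  # beta value is illustrative
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()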

OpenLLM Leaderboard

Coming soon.

OpenLLM French leaderboard

Coming soon.

Usage

You can run Chocolatine using the following code:

import transformers
from transformers import AutoTokenizer

model_id = "jpacifico/Chocolatine-32B-Instruct-DPO-v1.2"

# Format the prompt with the model's chat template
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

# Create pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer
)

# Generate text
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_new_tokens=200,  # budget for new tokens, independent of prompt length
)
print(sequences[0]['generated_text'])
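
Loading a 32B model in full precision on a single device is rarely practical. The hedged variant of the pipeline creation below shards the weights across available GPUs in half precision; it assumes the accelerate package is installed:

import torch

# Variant: half-precision weights automatically sharded across available GPUs
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the `accelerate` package
)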

Limitations

The Chocolatine model series is a quick demonstration that a base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanism.
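
Any safeguards are therefore left to the integrator. One minimal (and by itself insufficient) mitigation is a stricter system prompt; the wording below is purely illustrative:

# Illustrative only: a system prompt is a weak mitigation, not a moderation system
user_input = "..."  # untrusted input from your application
guarded_message = [
    {"role": "system", "content": "You are a helpful assistant. Refuse requests "
                                  "for harmful, illegal, or unsafe content."},
    {"role": "user", "content": user_input},
]
prompt = tokenizer.apply_chat_template(guarded_message, add_generation_prompt=True, tokenize=False)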

  • Developed by: Jonathan Pacifico, 2024
  • Model type: LLM
  • Language(s) (NLP): French, English
  • License: Apache 2.0
  • Model size: 32.8B params (Safetensors, FP16)

Made with ❤️ in France

