Model Name: llama_3_orca_mini_v5_8b_dpo
llama_3_orca_mini_v5_8b trained with various DPO datasets
Passionate about Generative AI? I help companies privately train and deploy custom LLMs/MLLMs affordably. For startups, I can even assist with securing GPU grants to get you started. Let's chat: https://www.linkedin.com/in/pankajam. Looking forward to connecting!
NOTICE
By providing proper credit and attribution, you are granted permission to use this model as a foundational base for further full fine-tuning, DPO, PPO, or ORPO tuning, and any kind of merges. I actively encourage users to customize and enhance the model according to their specific needs, as this version is designed to be a comprehensive general model. Dive in and innovate!
Evaluation
| Metric | Value |
|---|---|
| Avg. | 67.78 |
| AI2 Reasoning Challenge (25-shot) | 61.86 |
| HellaSwag (10-shot) | 82.35 |
| MMLU (5-shot) | 65.10 |
| TruthfulQA (0-shot) | 56.24 |
| Winogrande (5-shot) | 73.40 |
| GSM8k (5-shot) | 67.70 |
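As a quick sanity check, the reported average is the plain mean of the six benchmark scores above (a minimal sketch; the label strings are just for readability):

```python
# Benchmark scores copied from the table above
scores = {
    "ARC (25-shot)": 61.86,
    "HellaSwag (10-shot)": 82.35,
    "MMLU (5-shot)": 65.10,
    "TruthfulQA (0-shot)": 56.24,
    "Winogrande (5-shot)": 73.40,
    "GSM8k (5-shot)": 67.70,
}

# Unweighted mean, matching the "Avg." row (67.78) up to rounding
avg = sum(scores.values()) / len(scores)
print(avg)
```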
Example Usage
Here is the ChatML prompt format:

```
<|im_start|>system
You are Orca Mini, a helpful AI assistant.<|im_end|>
<|im_start|>user
Hello Orca Mini, what can you do for me?<|im_end|>
<|im_start|>assistant
```
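The template above can also be assembled by hand, which makes the role/turn structure explicit. A minimal sketch (the helper name `build_chatml_prompt` is illustrative, not part of the model's API):

```python
def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts into the ChatML format shown above."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Leave the assistant turn open so the model generates the reply
    prompt += "<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]
print(build_chatml_prompt(messages))
```

In practice, prefer `tokenizer.apply_chat_template`, which renders the chat template bundled with the model itself.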
Below is a code example showing how to use this model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_slug = "pankajmathur/orca_mini_v5_8b_dpo"
model = AutoModelForCausalLM.from_pretrained(
    model_slug, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_slug)
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]
# apply_chat_template with return_tensors="pt" returns a tensor of input ids
gen_input = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(gen_input, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][gen_input.shape[-1]:], skip_special_tokens=True))
```
This model is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT.
Quants
- GGUF: Coming Soon
- AWQ: Coming Soon
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 19.96 |
| IFEval (0-shot) | 48.96 |
| BBH (3-shot) | 29.61 |
| MATH Lvl 5 (4-shot) | 7.48 |
| GPQA (0-shot) | 3.24 |
| MuSR (0-shot) | 6.94 |
| MMLU-PRO (5-shot) | 23.51 |