|
---
license: other
tags:
- merge
- mergekit
- lazymergekit
base_model:
- NousResearch/Meta-Llama-3-8B-Instruct
- mlabonne/OrpoLlama-3-8B
- cognitivecomputations/dolphin-2.9-llama3-8b
- Danielbrdz/Barcenas-Llama3-8b-ORPO
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- vicgalle/Configurable-Llama-3-8B-v0.3
- MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3
model-index:
- name: ChimeraLlama-3-8B-v3
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 44.08
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/ChimeraLlama-3-8B-v3
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 27.65
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/ChimeraLlama-3-8B-v3
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 7.85
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/ChimeraLlama-3-8B-v3
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.59
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/ChimeraLlama-3-8B-v3
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 8.38
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/ChimeraLlama-3-8B-v3
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 29.65
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/ChimeraLlama-3-8B-v3
      name: Open LLM Leaderboard
---
|
|
|
# ChimeraLlama-3-8B-v3

ChimeraLlama-3-8B-v3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B)
* [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
* [Danielbrdz/Barcenas-Llama3-8b-ORPO](https://huggingface.co/Danielbrdz/Barcenas-Llama3-8b-ORPO)
* [VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct)
* [vicgalle/Configurable-Llama-3-8B-v0.3](https://huggingface.co/vicgalle/Configurable-Llama-3-8B-v0.3)
* [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3)
|
|
|
## 🧩 Configuration

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # No parameters necessary for base model
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.6
      weight: 0.5
  - model: mlabonne/OrpoLlama-3-8B
    parameters:
      density: 0.55
      weight: 0.05
  - model: cognitivecomputations/dolphin-2.9-llama3-8b
    parameters:
      density: 0.55
      weight: 0.05
  - model: Danielbrdz/Barcenas-Llama3-8b-ORPO
    parameters:
      density: 0.55
      weight: 0.2
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters:
      density: 0.55
      weight: 0.1
  - model: vicgalle/Configurable-Llama-3-8B-v0.3
    parameters:
      density: 0.55
      weight: 0.05
  - model: MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3
    parameters:
      density: 0.55
      weight: 0.05
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: float16
```
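In `dare_ties`, `density` sets the fraction of each model's delta parameters (relative to the base model) that are retained, and `weight` scales that model's contribution to the final merge. To reproduce the merge, a config like this can be run with mergekit directly; here is a minimal sketch in the same notebook style as the usage example below, assuming the YAML above is saved as `config.yaml` (a hypothetical filename) and noting that CLI flags can vary between mergekit versions:

```python
!pip install -qU mergekit

# Run the merge; "merge" is the output directory for the merged weights.
!mergekit-yaml config.yaml merge --copy-tokenizer
```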
|
|
|
## 💻 Usage

```python
!pip install -qU transformers accelerate

import torch
import transformers
from transformers import AutoTokenizer

model = "mlabonne/ChimeraLlama-3-8B-v3"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
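By default the pipeline's output includes the prompt itself. To print only the newly generated text, you can pass `return_full_text=False`, a standard argument of the `transformers` text-generation pipeline:

```python
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, return_full_text=False)
print(outputs[0]["generated_text"])  # completion only, without the prompt
```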
|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__ChimeraLlama-3-8B-v3).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.53 |
| IFEval (0-Shot)     | 44.08 |
| BBH (3-Shot)        | 27.65 |
| MATH Lvl 5 (4-Shot) |  7.85 |
| GPQA (0-shot)       |  5.59 |
| MuSR (0-shot)       |  8.38 |
| MMLU-PRO (5-shot)   | 29.65 |
|
|
|
|