|
--- |
|
language: |
|
- en |
|
license: other |
|
library_name: transformers |
|
tags: |
|
- mergekit |
|
- merge |
|
- lazymergekit |
|
base_model: |
|
- Qwen/Qwen2.5-32B-Instruct |
|
license_name: tongyi-qianwen |
|
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE |
|
pipeline_tag: text-generation |
|
model-index: |
|
- name: BigQwen2.5-Echo-47B-Instruct |
|
results: |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: IFEval (0-Shot) |
|
type: HuggingFaceH4/ifeval |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: inst_level_strict_acc and prompt_level_strict_acc |
|
value: 73.57 |
|
name: strict accuracy |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-Echo-47B-Instruct |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: BBH (3-Shot) |
|
type: BBH |
|
args: |
|
num_few_shot: 3 |
|
metrics: |
|
- type: acc_norm |
|
value: 44.52 |
|
name: normalized accuracy |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-Echo-47B-Instruct |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MATH Lvl 5 (4-Shot) |
|
type: hendrycks/competition_math |
|
args: |
|
num_few_shot: 4 |
|
metrics: |
|
- type: exact_match |
|
value: 3.47 |
|
name: exact match |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-Echo-47B-Instruct |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: GPQA (0-shot) |
|
type: Idavidrein/gpqa |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: acc_norm |
|
value: 8.61 |
|
name: acc_norm |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-Echo-47B-Instruct |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MuSR (0-shot) |
|
type: TAUR-Lab/MuSR |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: acc_norm |
|
value: 10.19 |
|
name: acc_norm |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-Echo-47B-Instruct |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MMLU-PRO (5-shot) |
|
type: TIGER-Lab/MMLU-Pro |
|
config: main |
|
split: test |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 41.49 |
|
name: accuracy |
|
source: |
|
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-Echo-47B-Instruct |
|
name: Open LLM Leaderboard |
|
--- |
|
|
|
# BigQwen2.5-Echo-47B-Instruct |
|
|
|
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/98GiKtmH1AtHHbIbOUH4Y.jpeg) |
|
|
|
BigQwen2.5-Echo-47B-Instruct is a [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) self-merge made with [MergeKit](https://github.com/arcee-ai/mergekit/tree/main).
|
|
|
## 🔉 Echo Merge |
|
|
|
I've tried a more gradual approach with a **distributed repetition pattern**. Instead of replicating blocks of 8 or more layers, I'm replicating individual layers in these blocks: |
|
- First 8 layers: No replication

- Next 8 layers: Replicate 2 layers (first one, middle one)

- Next 8 layers: Replicate 4 layers (1st, 3rd, 5th, 7th)

- Next 8 layers: Replicate 8 layers (all of them)

- Middle 8 layers: Replicate 8 layers (all of them)

- Next 8 layers: Replicate 4 layers (1st, 3rd, 5th, 7th)

- Next 8 layers: Replicate 2 layers (first one, middle one)

- Last 8 layers: No replication
|
|
|
I used this string to visualize it, where 0s are original layers and 1s are duplicated ones (the order within each block doesn't matter):
|
``` |
|
00000000 1000010000 100100100100 1010101010101010 1010101010101010 100100100100 1000010000 00000000 |
|
``` |
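
The visualization string above can be reproduced with a short script (a sketch; the duplicate positions within each block are illustrative, and the order within a block doesn't matter):

```python
# Sketch: build the 0/1 visualization string from the replication pattern.
# Each tuple is (block_size, positions of duplicated layers within the block);
# the exact positions are illustrative.
pattern = [
    (8, []),                # first 8 layers: no replication
    (8, [0, 4]),            # replicate 2 layers
    (8, [0, 2, 4, 6]),      # replicate 4 layers
    (8, list(range(8))),    # replicate all 8
    (8, list(range(8))),    # middle 8: replicate all 8
    (8, [0, 2, 4, 6]),      # replicate 4 layers
    (8, [0, 4]),            # replicate 2 layers
    (8, []),                # last 8 layers: no replication
]

def visualize(pattern):
    groups = []
    for size, dups in pattern:
        bits = []
        for i in range(size):
            bits.append("0")      # original layer
            if i in dups:
                bits.append("1")  # duplicated copy right after it
        groups.append("".join(bits))
    return " ".join(groups)

print(visualize(pattern))
```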
|
|
|
The main idea is that the input/output difference of middle layers is quite small, so replicating a middle layer has only a small impact on the output.

The additional layers are meant to increase the model's capacity without breaking the information flow; a broken flow is what often produces "insane" self-merges.
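
This intuition can be illustrated with a toy residual block (pure NumPy, with a hypothetical update scale; not the actual model): when a layer only adds a small update to the residual stream, its output stays close to its input, so applying the layer twice barely changes the result.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64

def toy_layer(x, scale=0.05):
    # Toy stand-in for a transformer block: output = input + small residual update.
    return x + scale * rng.normal(size=x.shape)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

h = rng.normal(size=dim)
once = toy_layer(h)
twice = toy_layer(once)  # "replicated" layer applied a second time

print(cosine(h, once))   # close to 1: small input/output difference
print(cosine(h, twice))  # still close to 1 after duplication
```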
|
|
|
## 🧩 Configuration |
|
|
|
The following YAML configuration was used to produce this model: |
|
|
|
```yaml |
|
slices: |
|
# First 8 layers: No replication |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [0, 8] |
|
|
|
# Next 8 layers: Replicate 2 layers |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [8, 9] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [8, 9] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [9, 13] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [13, 14] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [13, 14] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [14, 16] |
|
|
|
# Next 8 layers: Replicate 4 layers |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [16, 18] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [17, 19] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [18, 20] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [19, 21] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [20, 22] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [21, 23] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [22, 24] |
|
|
|
# Next 8 layers: Replicate all 8 layers |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [24, 25] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [24, 26] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [25, 27] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [26, 28] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [27, 29] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [28, 30] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [29, 31] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [30, 32] |
|
|
|
# Middle 8 layers: Replicate all 8 layers |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [32, 33] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [32, 34] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [33, 35] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [34, 36] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [35, 37] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [36, 38] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [37, 39] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [38, 40] |
|
|
|
# Next 8 layers: Replicate 4 layers |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [40, 42] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [41, 43] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [42, 44] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [43, 45] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [44, 46] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [45, 47] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [46, 48] |
|
|
|
# Next 8 layers: Replicate 2 layers |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [48, 49] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [48, 49] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [49, 53] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [53, 54] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [53, 54] |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [54, 56] |
|
|
|
# Last 8 layers: No replication |
|
- sources: |
|
- model: Qwen/Qwen2.5-32B-Instruct |
|
layer_range: [56, 64] |
|
|
|
merge_method: passthrough |
|
dtype: bfloat16 |
|
``` |
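
As a sanity check, the slice widths in the configuration above can be summed to get the depth of the merged model (a quick sketch; the ranges are copied from the YAML, half-open `[start, end)` as in mergekit):

```python
# Sketch: count the layers in the merged model by summing slice widths.
ranges = [
    (0, 8),
    (8, 9), (8, 9), (9, 13), (13, 14), (13, 14), (14, 16),
    (16, 18), (17, 19), (18, 20), (19, 21), (20, 22), (21, 23), (22, 24),
    (24, 25), (24, 26), (25, 27), (26, 28), (27, 29), (28, 30), (29, 31), (30, 32),
    (32, 33), (32, 34), (33, 35), (34, 36), (35, 37), (36, 38), (37, 39), (38, 40),
    (40, 42), (41, 43), (42, 44), (43, 45), (44, 46), (45, 47), (46, 48),
    (48, 49), (48, 49), (49, 53), (53, 54), (53, 54), (54, 56),
    (56, 64),
]
total = sum(end - start for start, end in ranges)
print(total)  # layers in the merged model, vs 64 in Qwen2.5-32B-Instruct
```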
|
|
|
## 💻 Usage |
|
|
|
```python |
|
!pip install -qU transformers accelerate |
|
|
|
from transformers import AutoTokenizer |
|
import transformers |
|
import torch |
|
|
|
model = "mlabonne/BigQwen2.5-Echo-47B-Instruct" |
|
messages = [{"role": "user", "content": "What is a large language model?"}] |
|
|
|
tokenizer = AutoTokenizer.from_pretrained(model) |
|
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) |
|
pipeline = transformers.pipeline( |
|
"text-generation", |
|
model=model, |
|
torch_dtype=torch.float16, |
|
device_map="auto", |
|
) |
|
|
|
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) |
|
print(outputs[0]["generated_text"]) |
|
``` |
|
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__BigQwen2.5-Echo-47B-Instruct).
|
|
|
| Metric |Value| |
|
|-------------------|----:| |
|
|Avg. |30.31| |
|
|IFEval (0-Shot) |73.57| |
|
|BBH (3-Shot) |44.52| |
|
|MATH Lvl 5 (4-Shot)| 3.47| |
|
|GPQA (0-shot) | 8.61| |
|
|MuSR (0-shot) |10.19| |
|
|MMLU-PRO (5-shot) |41.49| |
|
|
|
|