---
license: cc-by-nc-4.0
tags:
- merge
- lazymergekit
dataset:
- mlabonne/truthy-dpo-v0.1
- mlabonne/distilabel-intel-orca-dpo-pairs
base_model:
- mlabonne/NeuralMonarch-7B
language:
- en
---

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/TI7C8F2gk43gmI9U2L0uk.jpeg)

# πŸ‘‘ AlphaMonarch-7B

**Update 14/02/24: AlphaMonarch-7B is the new best-performing 7B model on Nous' benchmark suite! πŸŽ‰**

AlphaMonarch-7B is a DPO fine-tune of [mlabonne/NeuralMonarch-7B](https://huggingface.co/mlabonne/NeuralMonarch-7B/) using the [argilla/OpenHermes2.5-dpo-binarized-alpha](https://huggingface.co/datasets/argilla/OpenHermes2.5-dpo-binarized-alpha) preference dataset.
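
As a rough illustration of what this kind of DPO fine-tuning looks like, here is a minimal sketch using the [trl](https://github.com/huggingface/trl) library. The hyperparameters are illustrative assumptions, not the actual training recipe, and the exact `DPOTrainer` arguments vary across trl versions.

```python
# Minimal DPO fine-tuning sketch (illustrative; not the actual recipe).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "mlabonne/NeuralMonarch-7B"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference pairs: each row has a "chosen" and a "rejected" completion.
# Depending on the trl version, the columns may need reformatting first.
dataset = load_dataset("argilla/OpenHermes2.5-dpo-binarized-alpha", split="train")

config = DPOConfig(
    output_dir="out",
    beta=0.1,                       # KL-penalty strength (assumed value)
    learning_rate=5e-6,             # assumed
    per_device_train_batch_size=2,  # assumed
)
trainer = DPOTrainer(
    model=model,                    # a frozen reference copy is created internally
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,     # `tokenizer=` in older trl versions
)
trainer.train()
```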

It is based on a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing) (a hypothetical config sketch follows the list):
* [mlabonne/OmniTruthyBeagle-7B-v0](https://huggingface.co/mlabonne/OmniTruthyBeagle-7B-v0)
* [mlabonne/NeuBeagle-7B](https://huggingface.co/mlabonne/NeuBeagle-7B)
* [mlabonne/NeuralOmniBeagle-7B](https://huggingface.co/mlabonne/NeuralOmniBeagle-7B)
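
For readers unfamiliar with LazyMergekit, a merge like this is driven by a small YAML config. The sketch below is hypothetical: the merge method, base model, weights, and densities are assumptions, since the actual recipe is not reproduced here.

```yaml
# Hypothetical mergekit config; method, weights, and densities are assumptions.
models:
  - model: mlabonne/OmniTruthyBeagle-7B-v0
    parameters:
      density: 0.65
      weight: 0.4
  - model: mlabonne/NeuBeagle-7B
    parameters:
      density: 0.6
      weight: 0.3
  - model: mlabonne/NeuralOmniBeagle-7B
    parameters:
      density: 0.6
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```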

Special thanks to [Jon Durbin](https://huggingface.co/jondurbin), [Intel](https://huggingface.co/Intel), and [Argilla](https://huggingface.co/argilla) for the preference datasets.

## πŸ” Applications

This model uses a context window of 8k. I recommend using it with the Mistral Instruct chat template.
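
To see what that template produces, you can render a conversation without tokenizing it. This is an illustrative sketch, assuming the tokenizer ships a Mistral-Instruct-style chat template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mlabonne/AlphaMonarch-7B")
messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "Explain DPO in one sentence."},
]
# tokenize=False returns the raw prompt string so you can inspect the format.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
# Mistral-Instruct-style output looks roughly like:
# <s>[INST] Hello! [/INST]Hi! How can I help?</s>[INST] Explain DPO in one sentence. [/INST]
```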

Compared to other 7B models, it displays good performance in instruction following and reasoning tasks. It can also be used for RP and storytelling.

## ⚑ Quantized models

* **GGUF**: https://huggingface.co/mlabonne/AlphaMonarch-7B-GGUF
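
A minimal sketch of running a GGUF quant locally with llama-cpp-python follows; the quant filename is an assumption, so pick an actual file from the GGUF repository:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="alphamonarch-7b.Q4_K_M.gguf",  # assumed filename
    n_ctx=8192,  # matches the model's 8k context window
)
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["message"]["content"])
```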

## πŸ† Evaluation

### Nous

The evaluation was performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on Nous suite. See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [**NeuralMonarch-7B**](https://huggingface.co/mlabonne/NeuralMonarch-7B) [πŸ“„](https://gist.github.com/mlabonne/64050c96c6aa261a8f5b403190c8dee4) | **62.73** | **45.31** | **76.99** | **78.35** | **50.28** |
| [Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B) [πŸ“„](https://gist.github.com/mlabonne/0b8d057c5ece41e0290580a108c7a093) | 62.68 | 45.48 | 77.07 | 78.04 | 50.14 |
| [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) [πŸ“„](https://gist.github.com/mlabonne/88b21dd9698ffed75d6163ebdc2f6cc8) | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |
| [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) [πŸ“„](https://gist.github.com/mlabonne/14687f1eb3425b166db511f31f8e66f6) | 53.51 | 43.67 | 73.24 | 55.37 | 41.76 |
| [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) [πŸ“„](https://gist.github.com/mlabonne/ad0c665bbe581c8420136c3b52b3c15c) | 60.25 | 46.06 | 76.77 | 70.32 | 47.86 |
| [eren23/dpo-binarized-NeuralTrix-7B](https://huggingface.co/eren23/dpo-binarized-NeuralTrix-7B) [πŸ“„](https://gist.github.com/CultriX-Github/dbdde67ead233df0c7c56f1b091f728c) | 62.5 | 44.57 | 76.34 | 79.81 | 49.27 |
| [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo) [πŸ“„](https://gist.github.com/CultriX-Github/df0502599867d4043b45d9dafb5976e8) | 62.5 | 44.61 | 76.33 | 79.8 | 49.24 |

### Open LLM Leaderboard

AlphaMonarch-7B is one of the best-performing non-merge 7B models on the Open LLM Leaderboard:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/njHxX_ERQaBssHqp17fMy.png)

### MT-Bench

```
########## First turn ##########
                                    score
model                       turn         
gpt-4                       1     8.95625
AlphaMonarch-7B             1     8.23750
claude-v1                   1     8.15000
gpt-3.5-turbo               1     8.07500
claude-instant-v1           1     7.80000

########## Second turn ##########
                                     score
model                       turn          
gpt-4                       2     9.025000
claude-instant-v1           2     8.012658
gpt-3.5-turbo               2     7.812500
claude-v1                   2     7.650000
AlphaMonarch-7B             2     7.618750

########## Average ##########
                                score
model                                
gpt-4                        8.990625
gpt-3.5-turbo                7.943750
AlphaMonarch-7B              7.928125
claude-instant-v1            7.905660
claude-v1                    7.900000
```

## πŸ’» Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/AlphaMonarch-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template (Mistral Instruct).
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in half precision, spread across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a completion from the formatted prompt.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```