mlabonne committed
Commit 618a0a1 • 1 Parent(s): 94b6900

Create README.md

Files changed (1)
  1. README.md +74 -0
README.md ADDED
 
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- fblgit/UNA-TheBeagle-7b-v1
- argilla/distilabeled-Marcoro14-7B-slerp
- dpo
- rlhf
---

![](https://i.imgur.com/89ZAKcn.png)

# NeuralBeagle14-7B

**Update 01/16/24: NeuralBeagle14-7B is probably the best 7B model you can find. 🎉**

NeuralBeagle14-7B is a DPO fine-tune of [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) using the [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs) preference dataset and my DPO notebook from [this article](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac).
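
As a rough illustration of what such a DPO run looks like, here is a minimal sketch using TRL's `DPOTrainer`. It is illustrative only: the hyperparameters and the column mapping are assumptions, not the settings from the notebook linked above, so check the article and the dataset card for the actual recipe.

```python
# Illustrative DPO sketch only: hyperparameters and column names are assumptions,
# not the settings used to train NeuralBeagle14-7B.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "mlabonne/Beagle14-7B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# DPOTrainer expects "prompt", "chosen", and "rejected" columns.
# The source column names below are assumptions; check the dataset card and adapt.
raw = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train")
dataset = raw.map(
    lambda row: {"prompt": row["input"], "chosen": row["chosen"], "rejected": row["rejected"]},
    remove_columns=raw.column_names,
)

config = DPOConfig(
    output_dir="neuralbeagle-dpo",
    beta=0.1,                        # strength of the KL penalty against the reference model
    learning_rate=5e-6,              # illustrative value
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,      # this argument is named `tokenizer` in older TRL releases
)
trainer.train()
```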

Thanks to [Argilla](https://huggingface.co/argilla) for providing the dataset and the training recipe [here](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp). 💪

## 🏆 Evaluation

The evaluation was performed with [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on the Nous suite. NeuralBeagle14-7B is the best-scoring 7B model on this suite to date.

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [**mlabonne/NeuralBeagle14-7B**](https://huggingface.co/mlabonne/NeuralBeagle14-7B) [📄](https://gist.github.com/mlabonne/ad0c665bbe581c8420136c3b52b3c15c) | **60.25** | **46.06** | **76.77** | **70.32** | **47.86** |
| [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) [📄](https://gist.github.com/mlabonne/f5a5bf8c0827bbec2f05b97cc62d642c) | 59.4 | 44.38 | 76.53 | 69.44 | 47.25 |
| [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B) [📄](https://gist.github.com/mlabonne/cbeb077d1df71cb81c78f742f19f4155) | 59.39 | 45.23 | 76.2 | 67.61 | 48.52 |
| [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp) [📄](https://gist.github.com/mlabonne/9082c4e59f4d3f3543c5eda3f4807040) | 58.93 | 45.38 | 76.48 | 65.68 | 48.18 |
| [mlabonne/NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B) [📄](https://gist.github.com/mlabonne/b31572a4711c945a4827e7242cfc4b9d) | 58.4 | 44.59 | 76.17 | 65.94 | 46.9 |
| [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) [📄](https://gist.github.com/mlabonne/1afab87b543b0717ec08722cf086dcc3) | 53.71 | 44.17 | 73.72 | 52.53 | 44.4 |
| [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/88b21dd9698ffed75d6163ebdc2f6cc8) | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |
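
The Average column is the plain arithmetic mean of the four suite scores, e.g. (46.06 + 76.77 + 70.32 + 47.86) / 4 ≈ 60.25 for NeuralBeagle14-7B, which you can verify in a couple of lines:

```python
# Sanity-check the Average column: it is the plain mean of the four suite scores.
scores = {"AGIEval": 46.06, "GPT4All": 76.77, "TruthfulQA": 70.32, "Bigbench": 47.86}
print(round(sum(scores.values()) / len(scores), 2))  # 60.25, matching the table
```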

You can find the complete benchmark on [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).

It's also at the top of the Open LLM Leaderboard:

![](https://i.imgur.com/62gUTFn.png)

Compared to Beagle14, there's no improvement on this benchmark. This might be due to an unlucky run, but I think I might be overexploiting argilla/distilabel-intel-orca-dpo-pairs at this point; another preference dataset could improve it further. Note that the Beagle models perform better than Turdus, which is purposely contaminated on Winogrande (hence its very high score).

## 💻 Usage

```python
# Install the runtime dependencies (notebook-style command).
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/NeuralBeagle14-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template, then generate with a
# text-generation pipeline in fp16.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
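
If the fp16 weights don't fit on your GPU, a 4-bit load through bitsandbytes is a common fallback. This variant is not part of the card's original instructions, so treat it as a suggestion; it requires the `bitsandbytes` package.

```python
# Optional: load the model in 4-bit to reduce memory use (pip install -qU bitsandbytes).
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mlabonne/NeuralBeagle14-7B"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
```

Prompt formatting and generation then work exactly as in the snippet above, at the cost of some output quality from the quantization.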

<p align="center">
  <a href="https://github.com/argilla-io/distilabel">
    <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
  </a>
</p>