alexmarques committed
Commit 5299637
Parent(s): 03f1419
Update README.md

README.md CHANGED
@@ -49,17 +49,20 @@ Model evaluation metrics and results.
 
 | Benchmark | Metric | Llama-2-7b-ultrachat | Llama-2-7b-pruned50-retrained-ultrachat |
 |------------------------------------------------|---------------|-------------|-------------------------------|
-| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot …
-| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | …
-| [WinoGrande](https://arxiv.org/abs/1907.10641) | …
-| [ARC-c](https://arxiv.org/abs/1911.01547) | …
-| [TruthfulQA](https://arxiv.org/abs/2109.07958) | 5-shot | …
-| [ …
-| [ …
+| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot | 46.1% | 41.4% |
+| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 75.9% | 73.5% |
+| [WinoGrande](https://arxiv.org/abs/1907.10641) | 5-shot | 72.6% | 67.8% |
+| [ARC-c](https://arxiv.org/abs/1911.01547) | 25-shot | 52.8% | 49.0% |
+| [TruthfulQA](https://arxiv.org/abs/2109.07958) | 5-shot | 44.8% | 39.5% |
+| [GSM8K](https://arxiv.org/abs/2110.14168) | 5-shot | 12.4% | 8.0% |
+| [AlpacaEval](https://arxiv.org/abs/2107.03374) ([Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) evaluator) | Win rate | 57.6% | 60.1% |
+| [AlpacaEval](https://arxiv.org/abs/2107.03374) (GPT-4 Turbo evaluator) | Win rate | 60.6% | 59.0% |
+
 
 ## Model Training Details
 
-…
+This model was obtained by sparse transfer of the sparse foundational model [Llama-2-7b-pruned50-retrained](https://huggingface.co/neuralmagic/Llama-2-7b-pruned50-retrained) on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset.
+Training was performed for 2 epochs and used [SquareHead](https://arxiv.org/abs/2310.06927) knowledge distillation with [Llama-2-7b-ultrachat](https://huggingface.co/neuralmagic/Llama-2-7b-ultrachat) as the teacher.
 
 ## Help
 
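A note on the benchmark table added above: the shot counts in the Metric column are standard few-shot evaluation settings. The card does not say which harness produced these numbers, so the snippet below is only a plausible reproduction sketch using EleutherAI's lm-evaluation-harness (`pip install lm-eval`); the task name, backend, and options are assumptions, not the documented evaluation setup.

```python
# Hypothetical sketch: reproducing the 5-shot MMLU row with
# EleutherAI's lm-evaluation-harness. The harness and its version are
# assumptions; the model card does not state how the table was produced.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=neuralmagic/Llama-2-7b-pruned50-retrained-ultrachat",
    tasks=["mmlu"],   # matches the table row: MMLU, 5-shot
    num_fewshot=5,
    batch_size="auto",
)
print(results["results"]["mmlu"])
```

The other rows would swap in their own task names and shot counts (e.g. `tasks=["gsm8k"], num_fewshot=5` or `tasks=["arc_challenge"], num_fewshot=25`).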
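On the training description added in this commit: SquareHead distillation, per the linked paper, combines the usual next-token loss with a KL term on the output distributions and per-layer squared-error terms on hidden states, each normalized by the teacher's feature magnitude. Below is a minimal PyTorch sketch of that loss shape. It is illustrative only: the model names are the ones cited in the card, but the loss weights, normalization constant, and training loop details are assumptions, not this model's actual recipe.

```python
# Illustrative sketch of a SquareHead-style distillation loss
# (arXiv:2310.06927). NOT the exact recipe used for this model; weights
# and normalization here are placeholders. Device/memory handling omitted.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM

student = AutoModelForCausalLM.from_pretrained(
    "neuralmagic/Llama-2-7b-pruned50-retrained", torch_dtype=torch.bfloat16
)
teacher = AutoModelForCausalLM.from_pretrained(
    "neuralmagic/Llama-2-7b-ultrachat", torch_dtype=torch.bfloat16
).eval()

def distillation_loss(input_ids, attention_mask, labels):
    s_out = student(input_ids, attention_mask=attention_mask,
                    labels=labels, output_hidden_states=True)
    with torch.no_grad():
        t_out = teacher(input_ids, attention_mask=attention_mask,
                        output_hidden_states=True)

    # Per-layer feature loss: squared error on hidden states, normalized
    # by the teacher's feature magnitude (the "SquareHead" idea).
    feat_loss = sum(
        F.mse_loss(hs, ht) / (ht.pow(2).mean() + 1e-6)
        for hs, ht in zip(s_out.hidden_states, t_out.hidden_states)
    )

    # KL(teacher || student) on the output token distributions.
    kl_loss = F.kl_div(
        F.log_softmax(s_out.logits, dim=-1),
        F.softmax(t_out.logits, dim=-1),
        reduction="batchmean",
    )

    # Combine with the ordinary next-token cross-entropy; the equal
    # weighting is a placeholder, not the published configuration.
    return s_out.loss + kl_loss + feat_loss
```

During sparse transfer, this loss would be minimized while keeping the pruned weights at zero (e.g. via masked optimizer steps), so the student retains the 50% sparsity of the foundational model while learning the chat task from the dense teacher.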