JunxiongWang committed
Commit: abfb5d3
1 Parent(s): 0c247b3

Update README.md

Files changed (1)
  1. README.md +24 -64
README.md CHANGED
@@ -1,67 +1,27 @@
  ---
- license: llama3.1
- base_model: meta-llama/Llama-3.1-8B-Instruct
- tags:
- - alignment-handbook
- - generated_from_trainer
- datasets:
- - JunxiongWang/sftdatasetv3
- model-index:
- - name: Llama-Mamba-3.1-8B-teacher-Llama-3.1-70B-Instruct-kl1.0-ce0.0
-   results: []
  ---
 
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/junxiong12/huggingface/runs/vtbgj6vc)
- # Llama-Mamba-3.1-8B-teacher-Llama-3.1-70B-Instruct-kl1.0-ce0.0
-
- This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the JunxiongWang/sftdatasetv3 dataset.
- It achieves the following results on the evaluation set:
- - Loss: 242.9601
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 2
- - eval_batch_size: 4
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 8
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 64
- - total_eval_batch_size: 32
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.01
- - num_epochs: 1
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:------:|:-----:|:---------------:|
- | 220.5766 | 1.0000 | 51995 | 242.9601 |
-
-
- ### Framework versions
-
- - Transformers 4.43.1
- - Pytorch 2.1.1+cu118
- - Datasets 2.20.0
- - Tokenizers 0.19.1
 
  ---
+ license: apache-2.0
  ---
 
+ Zero-shot results when using [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) as the teacher model and [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) as the initialized model.
+
+ | Task | Llama-3.1-8B-Instruct | Llama3.1-Mamba-8B-distill | Llama3.1-Mamba-8B-dpo | Llama3.1-Mamba2-8B-distill | Llama3.1-Mamba2-8B-dpo |
+ |---------------------|-----------------------|--------------------------|-----------------------|---------------------------|-----------------------|
+ | arc_challenge | 0.552 | 0.5384 | 0.5657 | 0.5265 | 0.5973 |
+ | arc_easy | 0.8178 | 0.8224 | 0.8401 | 0.822 | 0.8481 |
+ | hellaswag | 0.7921 | 0.7591 | 0.7736 | 0.7536 | 0.7969 |
+ | mmlu (0 shot) | 0.6812 | 0.6213 | 0.636 | 0.6101 | 0.5974 |
+ | openbookqa | 0.432 | 0.428 | 0.442 | 0.416 | 0.44 |
+ | piqa | 0.8079 | 0.7933 | 0.8041 | 0.7889 | 0.8003 |
+ | pubmedqa | 0.752 | 0.72 | 0.744 | 0.726 | 0.746 |
+ | race | 0.4478 | 0.4211 | 0.4344 | 0.4211 | 0.4612 |
+ | winogrande | 0.7388 | 0.7277 | 0.738 | 0.7174 | 0.7411 |
+ | truthful | 0.4267 | 0.4002 | 0.4607 | 0.4031 | 0.5022 |
+
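+ The card does not state how these zero-shot numbers were obtained. As a hedged sketch only, a run with EleutherAI's lm-evaluation-harness (0.4-style API) might look like the block below; the repo id, dtype, and exact task variants (e.g. which TruthfulQA task "truthful" maps to) are assumptions, and the hybrid Mamba layers may require the custom modeling code released with the paper cited below rather than the plain `hf` loader.
+
+ ```python
+ # Hypothetical reproduction sketch -- not the authors' evaluation script.
+ # Assumes lm-evaluation-harness >= 0.4 and that the checkpoint loads as a
+ # standard Hugging Face causal LM; the hybrid Mamba blocks may instead need
+ # the custom modeling code from the MambaInLlama release.
+ import lm_eval
+
+ results = lm_eval.simple_evaluate(
+     model="hf",
+     # placeholder repo id and dtype -- adjust to the actual checkpoint
+     model_args="pretrained=JunxiongWang/Llama3.1-Mamba-8B-distill,dtype=bfloat16",
+     tasks=[
+         "arc_challenge", "arc_easy", "hellaswag", "openbookqa", "piqa",
+         "pubmedqa", "race", "winogrande", "mmlu", "truthfulqa_mc2",
+     ],
+     num_fewshot=0,
+     batch_size=8,
+ )
+
+ # Print per-task metrics (accuracy, normalized accuracy, etc.)
+ for task, metrics in results["results"].items():
+     print(task, metrics)
+ ```
+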
+ ```
+ @article{junxiongdaniele2024mambainllama,
+ title = {The Mamba in the Llama: Distilling and Accelerating Hybrid Models},
+ author = {Junxiong Wang and Daniele Paliotta and Avner May and Alexander M. Rush and Tri Dao},
+ journal = {arXiv preprint arXiv:2408.15237},
+ year = {2024}
+ }
+ ```
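+
+ For completeness, a minimal, hedged usage sketch follows. It assumes the checkpoint (the repo id shown is a placeholder) loads through the standard transformers `AutoModelForCausalLM`/`AutoTokenizer` path; since this is a hybrid Mamba/Transformer model distilled from Llama-3.1, loading may in fact require the wrapper code released with the paper above, in which case only the prompting pattern carries over.
+
+ ```python
+ # Hedged usage sketch -- placeholder repo id; standard transformers loading
+ # assumed, though the hybrid Mamba layers may require the authors' custom code.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "JunxiongWang/Llama3.1-Mamba-8B-distill"  # placeholder
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+
+ # Llama-3.1-style chat formatting via the tokenizer's chat template.
+ messages = [{"role": "user", "content": "Summarize the Mamba architecture in two sentences."}]
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
+ # Decode only the newly generated tokens.
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```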