lapp0 committed on
Commit cb04b21
1 Parent(s): b60d80f

End of training

Files changed (14)
  1. README.md +14 -13
  2. benchmarks.shelve.bak +1 -0
  3. benchmarks.shelve.dat +0 -0
  4. benchmarks.shelve.dir +1 -0
  5. logs/dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=20231101.en, dataset_uri=wikimedia_wikipedia, per_device_train_batch_size=8/events.out.tfevents.1727590373.1c1a426a2fee +3 -0
  6. logs/dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=None, dataset_uri=distily_filtered_redpajama_en, per_device_train_batch_size=8/events.out.tfevents.1727590373.1c1a426a2fee +3 -0
  7. logs/dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=sample-10BT, dataset_uri=HuggingFaceFW_fineweb, learning_rate=6e-05, per_device_train_batch_size=8/events.out.tfevents.1727590373.1c1a426a2fee +3 -0
  8. logs/dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=sample-10BT, dataset_uri=HuggingFaceFW_fineweb, per_device_train_batch_size=8/events.out.tfevents.1727590373.1c1a426a2fee +3 -0
  9. logs/dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=sample-10BT, dataset_uri=HuggingFaceFW_fineweb-edu, learning_rate=6e-05, per_device_train_batch_size=8/events.out.tfevents.1727590373.1c1a426a2fee +3 -0
  10. logs/dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=sample-10BT, dataset_uri=HuggingFaceFW_fineweb-edu, per_device_train_batch_size=8/events.out.tfevents.1727590373.1c1a426a2fee +3 -0
  11. logs/dataset_max_seq_length=1024, dataset_sample_size=4000000, dataset_subset=20231101.en, dataset_uri=wikimedia_wikipedia, learning_rate=6e-05, per_device_train_batch_size=8/events.out.tfevents.1727589945.1c1a426a2fee +3 -0
  12. logs/dataset_max_seq_length=1024, dataset_sample_size=4000000, dataset_subset=20231101.en, dataset_uri=wikimedia_wikipedia, learning_rate=6e-05, per_device_train_batch_size=8/events.out.tfevents.1727590373.1c1a426a2fee +3 -0
  13. logs/dataset_max_seq_length=1024, dataset_sample_size=4000000, dataset_subset=20231101.en, dataset_uri=wikimedia_wikipedia, per_device_train_batch_size=8/events.out.tfevents.1727590373.1c1a426a2fee +3 -0
  14. tokenizer.json +2 -14
README.md CHANGED
@@ -82,16 +82,17 @@ LlamaForCausalLM(
 - student 4: `dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=sample-10BT, dataset_uri=HuggingFaceFW_fineweb, learning_rate=6e-05, per_device_train_batch_size=8`
 - student 5: `dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=sample-10BT, dataset_uri=HuggingFaceFW_fineweb-edu, learning_rate=6e-05, per_device_train_batch_size=8`
 - student 6: `dataset_max_seq_length=1024, dataset_sample_size=4000000, dataset_subset=20231101.en, dataset_uri=wikimedia_wikipedia, per_device_train_batch_size=8`
-
-| Metric | teacher | student 0 | student 1 | student 2 | student 3 | student 4 | student 5 | student 6 |
-| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
-| tinyArc.acc_norm,none | 0.37 | 0.303 | 0.295 | 0.302 | 0.26 | 0.269 | **0.319** | 0.286 |
-| tinyGSM8k.exact_match,flexible-extract | 0.006 | 0.029 | **0.03** | 0.025 | 0.006 | 0.006 | 0.012 | 0.012 |
-| tinyGSM8k.exact_match,strict-match | 0.006 | **0.006** | **0.006** | **0.006** | **0.006** | **0.006** | **0.006** | **0.006** |
-| tinyHellaswag.acc_norm,none | 0.452 | 0.341 | 0.281 | 0.327 | 0.3 | 0.303 | 0.301 | **0.364** |
-| tinyMMLU.acc_norm,none | 0.341 | 0.276 | 0.281 | **0.31** | 0.286 | 0.279 | 0.292 | 0.295 |
-| tinyTruthfulQA.acc,none | 0.38 | **0.463** | 0.447 | 0.423 | 0.419 | 0.421 | 0.427 | 0.44 |
-| tinyWinogrande.acc_norm,none | 0.509 | 0.466 | 0.436 | 0.46 | **0.492** | 0.473 | 0.417 | 0.439 |
+- student 7: `dataset_max_seq_length=1024, dataset_sample_size=4000000, dataset_subset=20231101.en, dataset_uri=wikimedia_wikipedia, learning_rate=6e-05, per_device_train_batch_size=8`
+
+| Metric | teacher | student 0 | student 1 | student 2 | student 3 | student 4 | student 5 | student 6 | student 7 |
+| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
+| tinyArc.acc_norm,none | 0.37 | 0.303 | 0.295 | 0.302 | 0.26 | 0.269 | **0.319** | 0.286 | 0.299 |
+| tinyGSM8k.exact_match,flexible-extract | 0.006 | 0.029 | **0.03** | 0.025 | 0.006 | 0.006 | 0.012 | 0.012 | 0.017 |
+| tinyGSM8k.exact_match,strict-match | 0.006 | **0.006** | **0.006** | **0.006** | **0.006** | **0.006** | **0.006** | **0.006** | **0.006** |
+| tinyHellaswag.acc_norm,none | 0.452 | 0.341 | 0.281 | 0.327 | 0.3 | 0.303 | 0.301 | **0.364** | 0.356 |
+| tinyMMLU.acc_norm,none | 0.341 | 0.276 | 0.281 | 0.31 | 0.286 | 0.279 | 0.292 | 0.295 | **0.328** |
+| tinyTruthfulQA.acc,none | 0.38 | **0.463** | 0.447 | 0.423 | 0.419 | 0.421 | 0.427 | 0.44 | 0.436 |
+| tinyWinogrande.acc_norm,none | 0.509 | 0.466 | 0.436 | 0.46 | **0.492** | 0.473 | 0.417 | 0.439 | 0.482 |
 
 # Resource Usage
 
@@ -154,7 +155,7 @@ LlamaForCausalLM(
 <br/>
 
 # Train Dataset
-Trained on 1,857,293,914 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
+Trained on 1,857,304,596 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
 
 - Num Samples: `3,992,000`
 - Subset: `20231101.en`
@@ -184,7 +185,7 @@ The following hyperparameters were used during training:
 <details>
 <summary>Expand</summary>
 
-- learning_rate: `0.0001`
+- learning_rate: `6e-05`
 - train_batch_size: `8`
 - eval_batch_size: `4`
 - seed: `42`
@@ -204,7 +205,7 @@ The following hyperparameters were used during training:
 weight=0
 )
 )`
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x766de39d92d0>`
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7d28e0fc5450>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `None`
 - student_model_config: `{'num_hidden_layers': 15}`
benchmarks.shelve.bak CHANGED
@@ -6,3 +6,4 @@
 'distily_smollm_dataset_sweep/logs/dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=sample-10BT, dataset_uri=HuggingFaceFW_fineweb, learning_rate=6e-05, per_device_train_batch_size=8', (2560, 448)
 'distily_smollm_dataset_sweep/logs/dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=sample-10BT, dataset_uri=HuggingFaceFW_fineweb-edu, learning_rate=6e-05, per_device_train_batch_size=8', (3072, 448)
 'distily_smollm_dataset_sweep/logs/dataset_max_seq_length=1024, dataset_sample_size=4000000, dataset_subset=20231101.en, dataset_uri=wikimedia_wikipedia, per_device_train_batch_size=8', (3584, 448)
+'distily_smollm_dataset_sweep/logs/dataset_max_seq_length=1024, dataset_sample_size=4000000, dataset_subset=20231101.en, dataset_uri=wikimedia_wikipedia, learning_rate=6e-05, per_device_train_batch_size=8', (4096, 448)
benchmarks.shelve.dat CHANGED
Binary files a/benchmarks.shelve.dat and b/benchmarks.shelve.dat differ
 
benchmarks.shelve.dir CHANGED
@@ -6,3 +6,4 @@
 'distily_smollm_dataset_sweep/logs/dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=sample-10BT, dataset_uri=HuggingFaceFW_fineweb, learning_rate=6e-05, per_device_train_batch_size=8', (2560, 448)
 'distily_smollm_dataset_sweep/logs/dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=sample-10BT, dataset_uri=HuggingFaceFW_fineweb-edu, learning_rate=6e-05, per_device_train_batch_size=8', (3072, 448)
 'distily_smollm_dataset_sweep/logs/dataset_max_seq_length=1024, dataset_sample_size=4000000, dataset_subset=20231101.en, dataset_uri=wikimedia_wikipedia, per_device_train_batch_size=8', (3584, 448)
+'distily_smollm_dataset_sweep/logs/dataset_max_seq_length=1024, dataset_sample_size=4000000, dataset_subset=20231101.en, dataset_uri=wikimedia_wikipedia, learning_rate=6e-05, per_device_train_batch_size=8', (4096, 448)
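The `benchmarks.shelve.bak`/`.dat`/`.dir` triple above is the on-disk layout of a Python `shelve` store backed by `dbm.dumb`: the `.dir` file is a plain-text index mapping each key to an `(offset, length)` position of its pickled value inside the binary `.dat` file, which is why the run names and offsets are readable in this diff while `.dat` shows only as "binary files differ". A minimal sketch of writing and reading such a store (the key and metric value here are illustrative, not the repo's actual schema):

```python
import dbm.dumb
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "benchmarks.shelve")

# Illustrative key in the style of the sweep's log-directory names.
run_key = "distily_smollm_dataset_sweep/logs/example_run"

# dbm.dumb is the backend that produces the .bak/.dat/.dir file triple.
with shelve.Shelf(dbm.dumb.open(path, "c")) as db:
    db[run_key] = {"tinyMMLU.acc_norm,none": 0.328}  # illustrative value

# Reopening consults the .dir index, then seeks into .dat for the pickle.
with shelve.Shelf(dbm.dumb.open(path, "r")) as db:
    print(db[run_key])
```

Appending a new run (as this commit does) rewrites the index, which is why all three shelve files change together.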
logs/dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=20231101.en, dataset_uri=wikimedia_wikipedia, per_device_train_batch_size=8/events.out.tfevents.1727590373.1c1a426a2fee ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:888bee9f32b62227adff38516ef5d177b16366546c22a21838014ff9082be9df
+size 562
logs/dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=None, dataset_uri=distily_filtered_redpajama_en, per_device_train_batch_size=8/events.out.tfevents.1727590373.1c1a426a2fee ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:749317bf2e97cefc56b2ce557f4da78e27be8ef423e6fa2b8f62ef17db1d9ae2
+size 562
logs/dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=sample-10BT, dataset_uri=HuggingFaceFW_fineweb, learning_rate=6e-05, per_device_train_batch_size=8/events.out.tfevents.1727590373.1c1a426a2fee ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:df8391fa5a07219420e01ce5e17e4864cddc2dc51390aad11ddaf55c18c03987
+size 562
logs/dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=sample-10BT, dataset_uri=HuggingFaceFW_fineweb, per_device_train_batch_size=8/events.out.tfevents.1727590373.1c1a426a2fee ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7c784334da2e5330d6b31b23bdc3318d10d36be46cc18613ec21d09d0d7430e
+size 562
logs/dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=sample-10BT, dataset_uri=HuggingFaceFW_fineweb-edu, learning_rate=6e-05, per_device_train_batch_size=8/events.out.tfevents.1727590373.1c1a426a2fee ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71859bc8d5596c682042fdce47acf6e30501a7eda1fdbb6ec66b2979d278770a
+size 562
logs/dataset_max_seq_length=1024, dataset_sample_size=1000000, dataset_subset=sample-10BT, dataset_uri=HuggingFaceFW_fineweb-edu, per_device_train_batch_size=8/events.out.tfevents.1727590373.1c1a426a2fee ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d28632bbc2530af957f7310b44d47d8d87771621e14a80851c2d16aca1c10e2
+size 562
logs/dataset_max_seq_length=1024, dataset_sample_size=4000000, dataset_subset=20231101.en, dataset_uri=wikimedia_wikipedia, learning_rate=6e-05, per_device_train_batch_size=8/events.out.tfevents.1727589945.1c1a426a2fee ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6919a2af5b5c5aa359def522879a3d50a5dcd6c07600674d9e92e0a5138312c3
+size 529
logs/dataset_max_seq_length=1024, dataset_sample_size=4000000, dataset_subset=20231101.en, dataset_uri=wikimedia_wikipedia, learning_rate=6e-05, per_device_train_batch_size=8/events.out.tfevents.1727590373.1c1a426a2fee ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a26466734519afa045452e605ff0647d57b21d6dd5f0ad727d9f38e6888a4e6e
+size 562
logs/dataset_max_seq_length=1024, dataset_sample_size=4000000, dataset_subset=20231101.en, dataset_uri=wikimedia_wikipedia, per_device_train_batch_size=8/events.out.tfevents.1727590373.1c1a426a2fee ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a6aab8254ba2789047e74bb6d8bb96ba2de6cef04dbe7595f6336d0ad9d7252
+size 562
tokenizer.json CHANGED
@@ -1,19 +1,7 @@
 {
   "version": "1.0",
-  "truncation": {
-    "direction": "Right",
-    "max_length": 1023,
-    "strategy": "LongestFirst",
-    "stride": 0
-  },
-  "padding": {
-    "strategy": "BatchLongest",
-    "direction": "Right",
-    "pad_to_multiple_of": null,
-    "pad_id": 0,
-    "pad_type_id": 0,
-    "pad_token": "<|endoftext|>"
-  },
+  "truncation": null,
+  "padding": null,
   "added_tokens": [
     {
       "id": 0,
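The tokenizer.json hunk above clears the serialized truncation and padding state (right-truncation at 1023 tokens, batch-longest padding with `<|endoftext|>`), so the tokenizer now loads with neither enabled; in the `tokenizers` library this is the serialization produced after calling `Tokenizer.no_truncation()` and `Tokenizer.no_padding()`. A dependency-free sketch of the same edit applied to the JSON directly (the config literal is abridged from the diff):

```python
import json

# Abridged tokenizer.json state before this commit (fields from the diff).
cfg = {
    "version": "1.0",
    "truncation": {
        "direction": "Right",
        "max_length": 1023,
        "strategy": "LongestFirst",
        "stride": 0,
    },
    "padding": {
        "strategy": "BatchLongest",
        "direction": "Right",
        "pad_to_multiple_of": None,
        "pad_id": 0,
        "pad_type_id": 0,
        "pad_token": "<|endoftext|>",
    },
}

# The commit's effect: both sections become JSON null.
cfg["truncation"] = None
cfg["padding"] = None

print(json.dumps(cfg))  # {"version": "1.0", "truncation": null, "padding": null}
```

With both fields null, per-call arguments (or a downstream trainer's own collator) control sequence length and padding instead of state baked into the saved tokenizer.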