lapp0 committed
Commit: 29c575f
1 Parent(s): f926064

End of training

README.md CHANGED
@@ -16,14 +16,14 @@ This student model is distilled from the teacher model [gpt2](https://huggingface.co/gpt2)
 The [Distily](https://github.com/lapp0/distily) library was used for this distillation.
 
 It achieves the following results on the evaluation set:
- - eval_enwikippl: 1248.0
- - eval_frwikippl: 6560.0
- - eval_zhwikippl: 24448.0
- - eval_tinystoriesppl: 968.0
- - eval_loss: 2.4413
- - eval_runtime: 12.6655
- - eval_samples_per_second: 47.373
- - eval_steps_per_second: 11.843
+ - eval_enwikippl: 2160.0
+ - eval_frwikippl: 9536.0
+ - eval_zhwikippl: 98816.0
+ - eval_tinystoriesppl: 1960.0
+ - eval_loss: 3.2783
+ - eval_runtime: 12.9016
+ - eval_samples_per_second: 46.506
+ - eval_steps_per_second: 11.626
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment.
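The eval_*ppl numbers in this hunk are perplexities of the student on held-out text, apparently English Wikipedia (enwikippl), French Wikipedia (frwikippl), Chinese Wikipedia (zhwikippl), and TinyStories (tinystoriesppl). For reference, here is a minimal sketch of how perplexity is conventionally computed for a causal LM with transformers; the model id and sample text are placeholders, and Distily's exact evaluation protocol is not shown here:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: the teacher "gpt2" stands in for the student checkpoint.
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity = exp(mean token-level cross-entropy)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing input_ids as labels makes the model return the
        # shifted next-token cross-entropy loss.
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))
```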
@@ -46,7 +46,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
- - distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=0, loss_fn=None, layer_mapper=None, projector=None), attn_loss_component=LossComponent(label=attn, weight=0, loss_fn=None, layer_mapper=None, projector=None))
+ - distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=1.0, loss_fn=kl, layer_mapper=all, projector=None), attn_loss_component=LossComponent(label=attn, weight=0, loss_fn=None, layer_mapper=None, projector=None))
 - train_embeddings: True
 - learning_rate: 0.0001
 - train_batch_size: 8
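This hunk turns on a hidden-states distillation term: the hs component's weight goes from 0 to 1.0, with a KL loss_fn and layer_mapper=all, on top of the unchanged KL loss over logits. Below is a minimal sketch of what such a combined objective can compute, assuming teacher and student expose the same number of hidden states; it is illustrative only, not Distily's actual implementation:

```python
import torch
import torch.nn.functional as F
from types import SimpleNamespace

def kl_loss(student_feats, teacher_feats):
    # KL(teacher || student) over the last dimension; F.kl_div expects
    # log-probabilities for the input and probabilities for the target.
    return F.kl_div(
        F.log_softmax(student_feats, dim=-1),
        F.softmax(teacher_feats, dim=-1),
        reduction="batchmean",
    )

def distillation_loss(student_out, teacher_out, hs_weight=1.0):
    # logits component: weight=1, loss_fn=kl.
    loss = kl_loss(student_out.logits, teacher_out.logits)
    # hs component: weight=1.0, loss_fn=kl, layer_mapper=all -- pair each
    # student hidden state with the same-index teacher hidden state
    # (both forward passes need output_hidden_states=True).
    hs_losses = [
        kl_loss(s, t)
        for s, t in zip(student_out.hidden_states, teacher_out.hidden_states)
    ]
    return loss + hs_weight * sum(hs_losses) / len(hs_losses)

# Toy shapes standing in for real model outputs:
# batch=2, seq=4, vocab=8, hidden=16, 3 hidden states.
fake = lambda: SimpleNamespace(
    logits=torch.randn(2, 4, 8),
    hidden_states=tuple(torch.randn(2, 4, 16) for _ in range(3)),
)
print(distillation_loss(fake(), fake()))
```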
@@ -58,23 +58,23 @@ The following hyperparameters were used during training:
 - num_epochs: 1.0
 
 ### Resource Usage
- Peak GPU Memory: 7.9388 GB
+ Peak GPU Memory: 8.0905 GB
 
 ### Eval-Phase Metrics
 | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
 | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
 | **teacher eval** | | 43.75 | 61.75 | | | | | 11.8125 | 19.125 |
- | 0 | 0 | 1065151889408.0 | 117097988358144.0 | 20.4033 | 12.6433 | 47.456 | 11.864 | 4362076160.0 | 27762668601344.0 |
- | 750 | 0.1010 | 1248.0 | 6560.0 | 2.4413 | 12.6655 | 47.373 | 11.843 | 968.0 | 24448.0 |
- | 1500 | 0.2020 | 500.0 | 3712.0 | 1.8019 | 12.6495 | 47.433 | 11.858 | 352.0 | 820.0 |
- | 2250 | 0.3030 | 342.0 | 1832.0 | 1.5659 | 12.681 | 47.315 | 11.829 | 276.0 | 294.0 |
- | 3000 | 0.4040 | 245.0 | 932.0 | 1.3644 | 12.6925 | 47.272 | 11.818 | 198.0 | 228.0 |
- | 3750 | 0.5051 | 191.0 | 680.0 | 1.2038 | 12.6837 | 47.305 | 11.826 | 159.0 | 219.0 |
- | 4500 | 0.6061 | 148.0 | 596.0 | 1.0541 | 12.6874 | 47.291 | 11.823 | 127.5 | 188.0 |
- | 5250 | 0.7071 | 129.0 | 442.0 | 0.9331 | 12.6692 | 47.359 | 11.84 | 102.5 | 126.5 |
- | 6000 | 0.8081 | 117.0 | 412.0 | 0.8757 | 12.6912 | 47.277 | 11.819 | 94.5 | 124.5 |
- | 6750 | 0.9091 | 112.0 | 398.0 | 0.8483 | 12.8574 | 46.666 | 11.666 | 90.0 | 119.5 |
- | 7425 | 1.0 | 110.5 | 392.0 | 0.8433 | 12.717 | 47.181 | 11.795 | 89.0 | 119.0 |
+ | 0 | 0 | 1855425871872.0 | 61297773248512.0 | 26.3692 | 12.8695 | 46.622 | 11.655 | 14495514624.0 | 11338713661440.0 |
+ | 750 | 0.1010 | 2160.0 | 9536.0 | 3.2783 | 12.9016 | 46.506 | 11.626 | 1960.0 | 98816.0 |
+ | 1500 | 0.2020 | 740.0 | 4640.0 | 2.2764 | 12.9025 | 46.503 | 11.626 | 612.0 | 9728.0 |
+ | 2250 | 0.3030 | 450.0 | 2656.0 | 1.9411 | 12.9079 | 46.483 | 11.621 | 344.0 | 564.0 |
+ | 3000 | 0.4040 | 312.0 | 1544.0 | 1.6865 | 12.9248 | 46.422 | 11.606 | 272.0 | 304.0 |
+ | 3750 | 0.5051 | 237.0 | 964.0 | 1.4813 | 12.9469 | 46.343 | 11.586 | 199.0 | 252.0 |
+ | 4500 | 0.6061 | 185.0 | 712.0 | 1.2780 | 13.0095 | 46.12 | 11.53 | 144.0 | 254.0 |
+ | 5250 | 0.7071 | 143.0 | 520.0 | 1.1058 | 13.0283 | 46.054 | 11.513 | 114.5 | 212.0 |
+ | 6000 | 0.8081 | 130.0 | 478.0 | 1.0297 | 12.9977 | 46.162 | 11.541 | 103.0 | 180.0 |
+ | 6750 | 0.9091 | 123.5 | 442.0 | 0.9899 | 12.9475 | 46.341 | 11.585 | 99.0 | 160.0 |
+ | 7425 | 1.0 | 122.5 | 438.0 | 0.9819 | 12.9461 | 46.346 | 11.587 | 97.5 | 160.0 |
 
 ### Framework versions
 - Distily 0.2.0
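Once training finishes, the resulting student is an ordinary causal LM checkpoint on the Hub. A hedged usage sketch with transformers follows; the repo id is a placeholder, since the card above does not state the final model id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: substitute the actual student repository.
repo_id = "lapp0/<distilled-student>"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```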
logs/hs_layer_mapper=all, hs_loss_fn=kl, hs_weight=1.0/events.out.tfevents.1724092951.f383272e719b ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5751aee29f11ef263fb2e003455bb13e2c840db1437c1257448b4b4eff93b788
+ size 578
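The added file is a Git LFS pointer (version/oid/size), not the TensorBoard event log itself; `git lfs pull` fetches the real file. Assuming a standard TensorBoard install, the scalars it contains can then be read like this:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Path exactly as committed in this repo (run `git lfs pull` first so the
# pointer is replaced by the actual event file).
path = "logs/hs_layer_mapper=all, hs_loss_fn=kl, hs_weight=1.0/events.out.tfevents.1724092951.f383272e719b"

ea = EventAccumulator(path)
ea.Reload()
for tag in ea.Tags()["scalars"]:
    first = ea.Scalars(tag)[0]
    print(tag, "first step:", first.step, "value:", first.value)
```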