lapp0 committed
Commit 739061a
1 Parent(s): 5de44fd

End of training

README.md CHANGED
@@ -16,13 +16,13 @@ This student model is distilled from the teacher model [gpt2](https://huggingfac
 The [Distily](https://github.com/lapp0/distily) library was used for this distillation.
 
 It achieves the following results on the evaluation set:
- - eval_enwikippl: 527.0228
- - eval_frwikippl: 3796.0032
- - eval_zhwikippl: 4795.4683
- - eval_loss: 2376.6721
- - eval_runtime: 21.817
- - eval_samples_per_second: 45.836
- - eval_steps_per_second: 11.459
+ - eval_enwikippl: 524.7870
+ - eval_frwikippl: 3705.5625
+ - eval_zhwikippl: 6035.2861
+ - eval_loss: 2370.7361
+ - eval_runtime: 21.6322
+ - eval_samples_per_second: 46.227
+ - eval_steps_per_second: 11.557
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment.
@@ -45,7 +45,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
- - distillation_objective: LinearObjective(logits_weight=1, logits_loss_fn=<function kl_divergence_loss at 0x7f57c4b07910>, activations_weight=1, activations_loss_fn=<function kl_divergence_loss at 0x7f57c4b07910>, attentions_weight=0, attentions_loss_fn=<function mse_loss at 0x7f57c4b07880>)
+ - distillation_objective: LinearObjective(logits_weight=1, logits_loss_fn=<function kl_divergence_loss at 0x7f57c4b07910>, activations_weight=10, activations_loss_fn=<function kl_divergence_loss at 0x7f57c4b07910>, attentions_weight=0, attentions_loss_fn=<function mse_loss at 0x7f57c4b07880>)
 - train_embeddings: True
 - learning_rate: 4e-05
 - train_batch_size: 4
@@ -64,20 +64,20 @@ Peak GPU Memory: 4.5067 GB
 | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | zhwikippl |
 | --- | --- | --- | --- | --- | --- | --- | --- | --- |
 | **teacher eval** | | 30.2385 | 57.2728 | | | | | 18.1772 |
- | 0 | 0 | 55339.3672 | 57682.5742 | 31197.1836 | 21.7082 | 46.065 | 11.516 | 57080.2930 |
- | 500 | 0.0808 | 1509.5735 | 7497.0439 | 3194.9919 | 21.4587 | 46.601 | 11.65 | 50589.3438 |
- | 1000 | 0.1616 | 1083.2607 | 5620.3037 | 2923.7439 | 21.5879 | 46.322 | 11.581 | 29616.2285 |
- | 1500 | 0.2424 | 906.6083 | 4937.0078 | 2796.2080 | 21.6636 | 46.16 | 11.54 | 21403.5996 |
- | 2000 | 0.3232 | 813.4678 | 4877.3267 | 2706.0481 | 21.5303 | 46.446 | 11.612 | 20010.4863 |
- | 2500 | 0.4040 | 750.0352 | 4512.8765 | 2636.6079 | 21.6059 | 46.284 | 11.571 | 16546.3457 |
- | 3000 | 0.4848 | 704.7218 | 4373.6377 | 2583.7920 | 21.6069 | 46.281 | 11.57 | 14758.0859 |
- | 3500 | 0.5657 | 667.2821 | 4153.7866 | 2537.5520 | 21.59 | 46.318 | 11.579 | 14131.2881 |
- | 4000 | 0.6465 | 635.3494 | 4060.9749 | 2505.6001 | 21.554 | 46.395 | 11.599 | 13081.5996 |
- | 4500 | 0.7273 | 605.6495 | 4037.2766 | 2468.9121 | 21.795 | 45.882 | 11.471 | 11453.9658 |
- | 5000 | 0.8081 | 573.4954 | 3881.2524 | 2437.7439 | 21.6801 | 46.125 | 11.531 | 8931.2441 |
- | 5500 | 0.8889 | 557.2740 | 3918.3730 | 2413.4880 | 21.5054 | 46.5 | 11.625 | 6643.0454 |
- | 6000 | 0.9697 | 549.7523 | 4035.1443 | 2392.2400 | 21.6194 | 46.255 | 11.564 | 5330.4404 |
- | 6187 | 0.9999 | 527.0228 | 3796.0032 | 2376.6721 | 21.817 | 45.836 | 11.459 | 4795.4683 |
+ | 0 | 0 | 55339.3672 | 57682.5742 | 31197.1836 | 21.4398 | 46.642 | 11.661 | 57080.2930 |
+ | 500 | 0.0808 | 1545.6934 | 7685.4297 | 3209.9360 | 21.4847 | 46.545 | 11.636 | 63830.4023 |
+ | 1000 | 0.1616 | 1108.6847 | 5659.8701 | 2933.1360 | 21.4559 | 46.607 | 11.652 | 31166.1797 |
+ | 1500 | 0.2424 | 913.3565 | 4893.8623 | 2798.0161 | 21.5956 | 46.306 | 11.576 | 23215.4258 |
+ | 2000 | 0.3232 | 813.5310 | 4763.6436 | 2700.0161 | 21.635 | 46.221 | 11.555 | 22568.9238 |
+ | 2500 | 0.4040 | 747.3608 | 4565.6851 | 2631.0720 | 21.5442 | 46.416 | 11.604 | 18090.1602 |
+ | 3000 | 0.4848 | 711.6094 | 4255.0127 | 2579.2639 | 21.7116 | 46.058 | 11.515 | 16199.8096 |
+ | 3500 | 0.5657 | 666.4665 | 4117.3369 | 2530.9441 | 21.5886 | 46.321 | 11.58 | 16435.1426 |
+ | 4000 | 0.6465 | 638.0192 | 4058.8262 | 2500.0801 | 21.4712 | 46.574 | 11.643 | 16069.4648 |
+ | 4500 | 0.7273 | 597.0923 | 4013.0125 | 2459.4241 | 21.7093 | 46.063 | 11.516 | 12965.0762 |
+ | 5000 | 0.8081 | 567.6912 | 3822.9963 | 2424.4800 | 21.5309 | 46.445 | 11.611 | 10275.5850 |
+ | 5500 | 0.8889 | 548.5159 | 3864.8674 | 2399.5359 | 21.6408 | 46.209 | 11.552 | 8114.6914 |
+ | 6000 | 0.9697 | 539.3817 | 3793.8606 | 2379.3601 | 21.5636 | 46.374 | 11.594 | 6467.9736 |
+ | 6187 | 0.9999 | 524.7870 | 3705.5625 | 2370.7361 | 21.6322 | 46.227 | 11.557 | 6035.2861 |
 
 ### Framework versions
 - Distily 0.2.0
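
The only substantive hyperparameter change in this commit is the distillation objective's `activations_weight`, raised from 1 to 10 while the logits and attention weights stay at 1 and 0. The card does not include Distily's source, so the following is a minimal PyTorch sketch of what such a linear objective could look like; the name `LinearObjective` and the loss-function names are taken from the repr above, but the reduction choices and per-layer averaging are assumptions, not Distily's actual implementation.

```python
import torch
import torch.nn.functional as F


def kl_divergence_loss(student: torch.Tensor, teacher: torch.Tensor) -> torch.Tensor:
    # KL(teacher || student) over the last dimension, averaged across the batch.
    return F.kl_div(
        F.log_softmax(student, dim=-1),
        F.softmax(teacher, dim=-1),
        reduction="batchmean",
    )


def mse_loss(student: torch.Tensor, teacher: torch.Tensor) -> torch.Tensor:
    return F.mse_loss(student, teacher)


class LinearObjective:
    """Weighted sum of logit, hidden-state, and attention losses (sketch only)."""

    def __init__(self, logits_weight=1.0, activations_weight=10.0, attentions_weight=0.0,
                 logits_loss_fn=kl_divergence_loss,
                 activations_loss_fn=kl_divergence_loss,
                 attentions_loss_fn=mse_loss):
        self.logits_weight = logits_weight
        self.activations_weight = activations_weight
        self.attentions_weight = attentions_weight
        self.logits_loss_fn = logits_loss_fn
        self.activations_loss_fn = activations_loss_fn
        self.attentions_loss_fn = attentions_loss_fn

    def __call__(self, student_out, teacher_out) -> torch.Tensor:
        # student_out / teacher_out: model outputs exposing .logits, .hidden_states,
        # .attentions (e.g. forward(..., output_hidden_states=True, output_attentions=True)).
        loss = self.logits_weight * self.logits_loss_fn(student_out.logits, teacher_out.logits)
        if self.activations_weight:
            layer_losses = [
                self.activations_loss_fn(s, t)
                for s, t in zip(student_out.hidden_states, teacher_out.hidden_states)
            ]
            loss = loss + self.activations_weight * torch.stack(layer_losses).mean()
        if self.attentions_weight:
            layer_losses = [
                self.attentions_loss_fn(s, t)
                for s, t in zip(student_out.attentions, teacher_out.attentions)
            ]
            loss = loss + self.attentions_weight * torch.stack(layer_losses).mean()
        return loss
```

With `attentions_weight=0` in both revisions, the attention term is skipped entirely; moving `activations_weight` from 1 to 10 simply scales the hidden-state loss relative to the logits loss.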
logs/distillation_objective=LinearObjective_logits_weight_1__logits_loss_fn__function_kl_divergence_loss_at_0x7f57c4b07910___activations_weight_10__activations_loss_fn__function_kl_divergence_loss_at_0x7f5/events.out.tfevents.1723375249.715f24f8d8b8 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61650b3cc0400f106f040a081f72ed842b1a6f3bf6934a754d4eec721bb16797
+ size 249
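
The added `events.out.tfevents.*` entry is not the TensorBoard log itself but a Git LFS pointer in the format defined at https://git-lfs.github.com/spec/v1: a version line, the SHA-256 object id, and the object size in bytes (249 here). A minimal sketch of reading such a pointer, using a hypothetical helper that is not part of Distily or git-lfs:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file ("key value" per line) into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:61650b3cc0400f106f040a081f72ed842b1a6f3bf6934a754d4eec721bb16797\n"
    "size 249\n"
)
assert pointer["size"] == "249"
```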