End of training
README.md CHANGED

@@ -16,13 +16,13 @@ This student model is distilled from the teacher model [gpt2](https://huggingfac
 The [Distily](https://github.com/lapp0/distily) library was used for this distillation.

 It achieves the following results on the evaluation set:
-- eval_enwikippl:
-- eval_frwikippl:
-- eval_zhwikippl:
-- eval_loss:
-- eval_runtime: 21.
-- eval_samples_per_second:
-- eval_steps_per_second: 11.
+- eval_enwikippl: 524.7870
+- eval_frwikippl: 3705.5625
+- eval_zhwikippl: 6035.2861
+- eval_loss: 2370.7361
+- eval_runtime: 21.6322
+- eval_samples_per_second: 46.227
+- eval_steps_per_second: 11.557

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment.
@@ -45,7 +45,7 @@ More information needed
 ### Training hyperparameters

 The following hyperparameters were used during training:
-- distillation_objective: LinearObjective(logits_weight=1, logits_loss_fn=<function kl_divergence_loss at 0x7f57c4b07910>, activations_weight=
+- distillation_objective: LinearObjective(logits_weight=1, logits_loss_fn=<function kl_divergence_loss at 0x7f57c4b07910>, activations_weight=10, activations_loss_fn=<function kl_divergence_loss at 0x7f57c4b07910>, attentions_weight=0, attentions_loss_fn=<function mse_loss at 0x7f57c4b07880>)
 - train_embeddings: True
 - learning_rate: 4e-05
 - train_batch_size: 4
@@ -64,20 +64,20 @@ Peak GPU Memory: 4.5067 GB
 | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | zhwikippl |
 | --- | --- | --- | --- | --- | --- | --- | --- | --- |
 | **teacher eval** | | 30.2385 | 57.2728 | | | | | 18.1772 |
-| 0 | 0 | 55339.3672 | 57682.5742 | 31197.1836 | 21.
-| 500 | 0.0808 |
-| 1000 | 0.1616 |
-| 1500 | 0.2424 |
-| 2000 | 0.3232 | 813.
-| 2500 | 0.4040 |
-| 3000 | 0.4848 |
-| 3500 | 0.5657 |
-| 4000 | 0.6465 |
-| 4500 | 0.7273 |
-| 5000 | 0.8081 |
-| 5500 | 0.8889 |
-| 6000 | 0.9697 |
-| 6187 | 0.9999 |
+| 0 | 0 | 55339.3672 | 57682.5742 | 31197.1836 | 21.4398 | 46.642 | 11.661 | 57080.2930 |
+| 500 | 0.0808 | 1545.6934 | 7685.4297 | 3209.9360 | 21.4847 | 46.545 | 11.636 | 63830.4023 |
+| 1000 | 0.1616 | 1108.6847 | 5659.8701 | 2933.1360 | 21.4559 | 46.607 | 11.652 | 31166.1797 |
+| 1500 | 0.2424 | 913.3565 | 4893.8623 | 2798.0161 | 21.5956 | 46.306 | 11.576 | 23215.4258 |
+| 2000 | 0.3232 | 813.5310 | 4763.6436 | 2700.0161 | 21.635 | 46.221 | 11.555 | 22568.9238 |
+| 2500 | 0.4040 | 747.3608 | 4565.6851 | 2631.0720 | 21.5442 | 46.416 | 11.604 | 18090.1602 |
+| 3000 | 0.4848 | 711.6094 | 4255.0127 | 2579.2639 | 21.7116 | 46.058 | 11.515 | 16199.8096 |
+| 3500 | 0.5657 | 666.4665 | 4117.3369 | 2530.9441 | 21.5886 | 46.321 | 11.58 | 16435.1426 |
+| 4000 | 0.6465 | 638.0192 | 4058.8262 | 2500.0801 | 21.4712 | 46.574 | 11.643 | 16069.4648 |
+| 4500 | 0.7273 | 597.0923 | 4013.0125 | 2459.4241 | 21.7093 | 46.063 | 11.516 | 12965.0762 |
+| 5000 | 0.8081 | 567.6912 | 3822.9963 | 2424.4800 | 21.5309 | 46.445 | 11.611 | 10275.5850 |
+| 5500 | 0.8889 | 548.5159 | 3864.8674 | 2399.5359 | 21.6408 | 46.209 | 11.552 | 8114.6914 |
+| 6000 | 0.9697 | 539.3817 | 3793.8606 | 2379.3601 | 21.5636 | 46.374 | 11.594 | 6467.9736 |
+| 6187 | 0.9999 | 524.7870 | 3705.5625 | 2370.7361 | 21.6322 | 46.227 | 11.557 | 6035.2861 |

 ### Framework versions
 - Distily 0.2.0
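The distillation_objective recorded in the hyperparameters above linearly combines per-component losses: KL divergence on the student/teacher logits with weight 1, KL divergence on hidden activations with weight 10, and an MSE term on attentions whose weight is 0, so it does not contribute. The sketch below is only an illustration of such a linear objective under those settings, not Distily's actual implementation; the function name, dictionary keys, and layer handling are assumptions made here for the example.

```python
import torch
import torch.nn.functional as F

def kl_divergence_loss(student: torch.Tensor, teacher: torch.Tensor) -> torch.Tensor:
    # KL(teacher || student) over the last dimension, averaged per batch element.
    student_log_probs = F.log_softmax(student, dim=-1)
    teacher_probs = F.softmax(teacher, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

def linear_distillation_objective(student_out: dict, teacher_out: dict,
                                  logits_weight: float = 1.0,
                                  activations_weight: float = 10.0,
                                  attentions_weight: float = 0.0) -> torch.Tensor:
    # Weighted sum of per-component losses, mirroring the recorded LinearObjective settings.
    loss = logits_weight * kl_divergence_loss(student_out["logits"], teacher_out["logits"])
    # Hidden-state (activation) term, sketched here as a KL term per layer.
    for s_h, t_h in zip(student_out["hidden_states"], teacher_out["hidden_states"]):
        loss = loss + activations_weight * kl_divergence_loss(s_h, t_h)
    # The attention term uses MSE in the recorded objective, but its weight is 0 in this run.
    if attentions_weight:
        for s_a, t_a in zip(student_out["attentions"], teacher_out["attentions"]):
            loss = loss + attentions_weight * F.mse_loss(s_a, t_a)
    return loss
```

How Distily normalizes or selects layers for the activation term is not recorded in this card; only the weights and the loss functions are.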
logs/distillation_objective=LinearObjective_logits_weight_1__logits_loss_fn__function_kl_divergence_loss_at_0x7f57c4b07910___activations_weight_10__activations_loss_fn__function_kl_divergence_loss_at_0x7f5/events.out.tfevents.1723375249.715f24f8d8b8 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:61650b3cc0400f106f040a081f72ed842b1a6f3bf6934a754d4eec721bb16797
+size 249
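The eval_enwikippl, eval_frwikippl, and eval_zhwikippl figures in the README diff are perplexities on English, French, and Chinese Wikipedia text, so lower is better; the student finishes at 524.79 on enwiki versus the teacher's 30.24. As a rough, self-contained sketch of how a causal-LM perplexity of that kind can be measured with the transformers API (this is not the evaluation code used for this card, and the simple chunking below ignores cross-chunk context):

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model_name: str, text: str, stride: int = 512) -> float:
    # Perplexity = exp(mean negative log-likelihood per predicted token).
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()
    ids = tokenizer(text, return_tensors="pt").input_ids
    total_nll, counted = 0.0, 0
    with torch.no_grad():
        for start in range(0, ids.size(1), stride):
            chunk = ids[:, start:start + stride]
            if chunk.size(1) < 2:
                break
            out = model(chunk, labels=chunk)      # out.loss is mean NLL over the predicted tokens
            n_tokens = chunk.size(1) - 1
            total_nll += out.loss.item() * n_tokens
            counted += n_tokens
    return math.exp(total_nll / counted)

# Example: score the teacher model on some Wikipedia text.
# perplexity("gpt2", "Some English Wikipedia passage ...")
```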