yukiontheiceberg committed "Update README.md" in e2619a8 (parent: 381a799)
README.md CHANGED
@@ -5,7 +5,7 @@ license: apache-2.0
 We encountered two major loss spikes while [training K2](https://huggingface.co/LLM360/K2).
 * The first loss spike occurred after 160 checkpoints and lasted for ~34 checkpoints. We restarted training at checkpoint 160 and training returned to normal.
 * The [second loss spike](https://huggingface.co/LLM360/K2-Spike-2/) occurred at checkpoint 186, after training was restarted to fix the first loss spike, and lasted for ~8 checkpoints.
-*
+* For every spike checkpoint, we also uploaded the corresponding normal checkpoint for easy comparison. You can find the different checkpoints in different branches of the repository.
 
 We are releasing these checkpoints so others can study this interesting phenomenon in large model training.
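Since each checkpoint lives on its own branch, the branch name can be passed as the `revision` argument when accessing the repo with `huggingface_hub`. A minimal sketch, assuming `huggingface_hub` is installed; `spike_ckpt` is a hypothetical branch name (check the branch list on the repo page for the actual names):

```python
from huggingface_hub import hf_hub_url

# Build the download URL for a file at a specific branch (revision).
# "spike_ckpt" is a placeholder -- substitute a real branch name from the repo.
url = hf_hub_url(
    repo_id="LLM360/K2-Spike-2",
    filename="config.json",
    revision="spike_ckpt",
)
print(url)  # → https://huggingface.co/LLM360/K2-Spike-2/resolve/spike_ckpt/config.json

# To fetch the full model weights from that branch (very large download),
# the same `revision` argument works with transformers:
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained("LLM360/K2-Spike-2", revision="spike_ckpt")
```

The same `revision` parameter is accepted by `snapshot_download` and `hf_hub_download`, so any of the spike or normal checkpoints can be pulled selectively without cloning every branch.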