During the incremental training process, we used 160 A100s with a total of 40GB
Throughout the training process, we encountered various issues such as machine crashes, underlying framework bugs, and loss spikes. However, we ensured the stability of the incremental training by making rapid adjustments. We have also released the loss curve from the training process to help everyone understand the potential issues that may arise.
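Loss spikes like the ones mentioned above are often caught by comparing each step's loss against a rolling baseline of recent steps. The sketch below is a minimal, hypothetical illustration of that idea (it is not the monitoring used in this project; the `window` and `threshold` values are arbitrary assumptions):

```python
def detect_loss_spikes(losses, window=5, threshold=1.5):
    """Flag steps whose loss exceeds `threshold` times the mean of the
    previous `window` losses — a simple rolling-baseline spike check.
    Both parameters are illustrative defaults, not tuned values."""
    spikes = []
    for i in range(window, len(losses)):
        baseline = sum(losses[i - window:i]) / window
        if losses[i] > threshold * baseline:
            spikes.append(i)
    return spikes

# A smoothly decreasing toy loss curve with one injected spike at step 8.
curve = [2.0, 1.9, 1.8, 1.7, 1.6, 1.5, 1.4, 1.3, 4.0, 1.2]
print(detect_loss_spikes(curve))  # → [8]
```

In practice a flagged step would trigger actions such as skipping the batch, rolling back to a recent checkpoint, or lowering the learning rate.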
<img src="https://huggingface.co/datasets/suolyer/testb/resolve/main/loss.png" width=1000 height=600>

<iframe src="https://wandb.ai/fengshenbang/llama2_13b_cpt_v1/reports/Untitled-Report--Vmlldzo0OTM3MjQ1" style="border:none;height:1024px;width:100%"></iframe>
### 多任务有监督微调 Supervised finetuning

In the multi-task supervised fine-tuning stage, we adopted curriculum learning and continual learning strategies: a large model was used to help grade the difficulty of the existing data, and SFT training was then carried out in multiple stages following an "Easy To Hard" progression.
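The easy-to-hard staging described above can be sketched as sorting examples by a difficulty score and splitting them into ordered training buckets. This is a minimal illustration only; `difficulty_fn` is a hypothetical stand-in for the large-model grader, and prompt length is used here purely as a toy proxy for difficulty:

```python
def build_curriculum_stages(examples, difficulty_fn, num_stages=3):
    """Sort examples from easy to hard by `difficulty_fn` (a hypothetical
    scorer standing in for the large-model grader) and split them into
    `num_stages` buckets, one per SFT training stage."""
    ranked = sorted(examples, key=difficulty_fn)
    stage_size = -(-len(ranked) // num_stages)  # ceiling division
    return [ranked[i:i + stage_size] for i in range(0, len(ranked), stage_size)]

# Toy data: difficulty approximated by prompt length (assumption for the demo).
data = ["hi", "explain transformers", "ok", "prove the spectral theorem", "sum 2+2"]
stages = build_curriculum_stages(data, difficulty_fn=len, num_stages=3)
# stages[0] holds the easiest prompts; training proceeds stage by stage.
```

Each bucket would then be used for one SFT phase, with training moving to the next bucket once the current one is consumed.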