During the incremental training process, we used 160 A100s with a total of 40GB
Throughout the training process, we encountered various issues such as machine crashes, underlying framework bugs, and loss spikes; by making rapid adjustments, we kept the incremental training stable. We have also released the loss curve from the training run to help everyone understand the issues that may arise.
<img src="https://wandb.ai/fengshenbang/llama2_13b_cpt_v1/reports/Untitled-Report--Vmlldzo0OTM3MjQ1" width=1000 height=600>

<iframe src="https://wandb.ai/fengshenbang/llama2_13b_cpt_v1/reports/Untitled-Report--Vmlldzo0OTM3MjQ1" style="border:none;height:1024px;width:100%"></iframe>
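The loss spikes mentioned above are typically handled by watching the recent loss average and rolling back to the last checkpoint when a step blows up. The sketch below is a hypothetical illustration of that kind of guard, not the actual logic used in this training run; the window size and spike factor are assumptions.

```python
# Minimal sketch of a loss-spike guard for a training loop.
# The window size and spike factor are illustrative assumptions,
# not values from this training run.
from collections import deque


def is_loss_spike(history, loss, factor=2.0):
    """Flag a step whose loss exceeds `factor` times the recent average."""
    if not history:
        return False
    recent_avg = sum(history) / len(history)
    return loss > factor * recent_avg


def guard_step(history, loss):
    """Return True if the caller should reload the last checkpoint and
    skip this batch; otherwise record the loss and continue."""
    if is_loss_spike(history, loss):
        return True  # caller rolls back and skips the offending batch
    history.append(loss)  # bounded window: old losses fall off the left
    return False


# Usage: keep a bounded window of recent losses per training step.
history = deque(maxlen=50)
```

On a spike the caller would reload the most recent checkpoint and skip (or reshuffle) the offending batch, which is one common way to keep an incremental run stable without manual intervention.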
### 多任务有监督微调 Supervised finetuning