Safetensors · English · llama · Eval Results
mkurman committed 5696c9e (1 parent: 86036fa)

Update README.md

Files changed (1)
  1. README.md +9 -6
README.md CHANGED
@@ -18,11 +18,14 @@ SmolLM2-MedIT-Upscale-2B is an expanded version of the [SmolLM2-1.7B-Instruct](h

  This model was developed to test the hypothesis that self-attention layers do not extend the "memory" of the model. By broadening the attention layers, we aim to observe the impact on the model's performance and memory capabilities.

- **Training Status**
+ ## Training Status

+ This model underwent instruction fine-tuning for 8,800 steps using a batch size of 4, gradient accumulation for 32 steps, a maximum sequence length of 1,280, and a learning rate of 1e-5. Additionally, it was fine-tuned with 1,600 steps of DPO under the same configuration.
+
+ **Note**:
  The model has undergone preliminary training focused on assessing the effects of the expanded attention layers. It is not fully trained to its maximum potential. We encourage the community to contribute to its further training; pull requests are welcome.

- **Analysis of Expanded Layers**
+ ## Analysis of Expanded Layers

  During fine-tuning, we analyzed the changes in the new parameters of the expanded layers:

@@ -49,19 +52,19 @@ These results are illustrated in the following charts:
  ![Percentage of Change](percent_of_change.png)
  ![Average Parameter Change](mean_difference.png)

- **Usage**
+ ## Usage

  To utilize this model, follow the instructions provided for the original SmolLM2-1.7B-Instruct model, adjusting for the increased parameter size.

- **Contributing**
+ ## Contributing

  We welcome contributions to further train and evaluate this model. Please submit pull requests with your improvements.

- **License**
+ ## License

  This model is licensed under the Apache 2.0 License.

- **Citation**
+ ## Citation

  If you use this model in your research, please cite it as follows:
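The diff leaves the upscaling itself untouched, and the card does not say how the attention layers were widened. Purely as an illustration of what "broadening" a self-attention projection can mean (not necessarily the authors' procedure), here is a sketch that adds zero-initialized output features to a projection layer; the function name, sizes, and zero-init choice are assumptions.

```python
import torch
import torch.nn as nn

def widen_linear(layer: nn.Linear, extra_out: int) -> nn.Linear:
    """Return a copy of `layer` with `extra_out` additional output features.

    New rows (and bias entries) start at zero, so the widened projection
    initially reproduces the original outputs padded with zeros. This is an
    illustrative choice only; the model card does not document the method.
    """
    widened = nn.Linear(
        layer.in_features,
        layer.out_features + extra_out,
        bias=layer.bias is not None,
    )
    with torch.no_grad():
        widened.weight.zero_()
        widened.weight[: layer.out_features] = layer.weight
        if layer.bias is not None:
            widened.bias.zero_()
            widened.bias[: layer.out_features] = layer.bias
    return widened

# Hypothetical sizes: widen a 2048-dim query projection by 512 output features.
q_proj = nn.Linear(2048, 2048, bias=False)
print(widen_linear(q_proj, 512).weight.shape)  # torch.Size([2560, 2048])
```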
 
 
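The new Training Status paragraph gives concrete hyperparameters: 8,800 SFT steps at batch size 4 with 32 gradient-accumulation steps, a 1,280-token maximum sequence length, a 1e-5 learning rate, and a further 1,600 DPO steps. A minimal sketch of how those numbers map onto a standard `transformers` configuration; only the numeric values come from the card, the argument names and output path are assumptions.

```python
from transformers import TrainingArguments

# Only the numbers below come from the model card; everything else is a placeholder.
sft_args = TrainingArguments(
    output_dir="smollm2-medit-upscale-2b-sft",  # hypothetical path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=32,  # effective batch of 4 * 32 = 128 sequences
    learning_rate=1e-5,
    max_steps=8_800,
)

MAX_SEQ_LEN = 1_280  # enforced at tokenization/packing time, not via TrainingArguments

# Per the card, the DPO stage reused the same settings for 1,600 steps
# (e.g. via trl's DPOTrainer); that stage is not sketched here.
```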
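The Analysis of Expanded Layers section and the two charts report a percentage of change and an average parameter change for the new parameters. A minimal sketch of how such per-tensor statistics can be computed from the state dicts of the upscaled model before and after fine-tuning; the tolerance and exact metric definitions are assumptions, not the authors' script.

```python
import torch

def expansion_change_stats(before: dict, after: dict, atol: float = 1e-8) -> dict:
    """Per-tensor change statistics between two state dicts.

    For each parameter tensor, report the percentage of entries that moved by
    more than `atol` and the mean absolute change.
    """
    stats = {}
    for name, w0 in before.items():
        diff = (after[name].float() - w0.float()).abs()
        stats[name] = {
            "percent_changed": (diff > atol).float().mean().item() * 100.0,
            "mean_abs_change": diff.mean().item(),
        }
    return stats

# Tiny self-contained example with fake weights:
w_before = {"layers.0.self_attn.q_proj.weight": torch.zeros(4, 4)}
w_after = {"layers.0.self_attn.q_proj.weight": torch.zeros(4, 4)}
w_after["layers.0.self_attn.q_proj.weight"][0, 0] = 0.01
print(expansion_change_stats(w_before, w_after))
# {'layers.0.self_attn.q_proj.weight': {'percent_changed': 6.25, 'mean_abs_change': 0.000625}}
```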
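The Usage section defers to the original SmolLM2-1.7B-Instruct instructions. A minimal generation sketch with `transformers`, assuming the checkpoint loads through `AutoModelForCausalLM` and ships a chat template; the repo id below is a placeholder for this repository's actual id.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meditsolutions/SmolLM2-MedIT-Upscale-2B"  # placeholder: substitute this repo's id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is gravity?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```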