ColPali
Safetensors
English
vidore
vidore-experimental
manu committed on
Commit 669dfe5
1 Parent(s): d1a87d4

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: vidore/colpaligemma2-3b-pt-448-base
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
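The quick-start snippet above is left as [More Information Needed]. As a stand-in, here is a minimal sketch of how a ColPali-style PEFT adapter on this base model is typically loaded and used for retrieval scoring with the `colpali-engine` library; the repo id, the library version, and support for this experimental colpaligemma2 checkpoint are assumptions rather than anything this card confirms.

```python
# Minimal sketch, assuming colpali-engine >= 0.3 and that it accepts this adapter repo.
import torch
from PIL import Image
from colpali_engine.models import ColPali, ColPaliProcessor

model_name = "vidore/<this-adapter-repo>"  # hypothetical id; substitute the actual repo name

model = ColPali.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",
).eval()
processor = ColPaliProcessor.from_pretrained(model_name)

images = [Image.new("RGB", (448, 448), "white")]   # placeholder document-page images
queries = ["What does the revenue chart show?"]     # example query

batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

with torch.no_grad():
    image_embeddings = model(**batch_images)    # multi-vector embeddings per page
    query_embeddings = model(**batch_queries)   # multi-vector embeddings per query

# Late-interaction (MaxSim) similarity between every query and every page.
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
print(scores)
```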
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.11.1
adapter_config.json ADDED
@@ -0,0 +1,26 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "vidore/colpaligemma2-3b-pt-448-base",
+ "bias": "none",
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": "gaussian",
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 32,
+ "lora_dropout": 0.1,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 32,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": "(.*(language_model).*(down_proj|gate_proj|up_proj|k_proj|q_proj|v_proj|o_proj).*$|.*(custom_text_proj).*$)",
+ "task_type": "FEATURE_EXTRACTION",
+ "use_dora": false,
+ "use_rslora": false
+ }
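Note that `target_modules` above is a single regular expression rather than a list of module names: PEFT full-matches it against each module path, so rank-32 LoRA is injected into the language model's attention and MLP projections and into the `custom_text_proj` head, while everything else (notably the vision tower) stays frozen. A small sketch of how the pattern selects modules; the module paths below are illustrative guesses, not read from this repository:

```python
import re

# Pattern copied verbatim from adapter_config.json above.
pattern = (
    r"(.*(language_model).*(down_proj|gate_proj|up_proj|k_proj|q_proj|v_proj|o_proj).*$"
    r"|.*(custom_text_proj).*$)"
)

# Hypothetical module paths, only to show what the regex would and would not select.
candidates = [
    "language_model.model.layers.0.self_attn.q_proj",   # selected for LoRA
    "language_model.model.layers.0.mlp.gate_proj",      # selected for LoRA
    "custom_text_proj",                                  # selected for LoRA
    "vision_tower.encoder.layers.0.self_attn.q_proj",    # not matched, left frozen
]

for name in candidates:
    status = "LoRA" if re.fullmatch(pattern, name) else "frozen"
    print(f"{name}: {status}")
```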
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:158dbd7a04d7bfc841004b489a80237ce022cc2155988a92c964a337d1ce9172
+ size 83279144
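The three lines above are a Git LFS pointer, not the weights themselves: `oid` is the SHA-256 digest of the real `adapter_model.safetensors` blob and `size` is its length in bytes. A quick sketch for checking a downloaded copy against the pointer (the local path is an assumption):

```python
import hashlib
import os

path = "adapter_model.safetensors"  # resolved file, e.g. after `git lfs pull`
expected_oid = "158dbd7a04d7bfc841004b489a80237ce022cc2155988a92c964a337d1ce9172"
expected_size = 83_279_144

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
        digest.update(chunk)

assert os.path.getsize(path) == expected_size, "size differs from the LFS pointer"
assert digest.hexdigest() == expected_oid, "sha256 differs from the LFS pointer"
print("adapter_model.safetensors matches its LFS pointer")
```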
checkpoint-4620/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: ./models/colpaligemma2-3b-pt-448-base
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.11.1
checkpoint-4620/adapter_config.json ADDED
@@ -0,0 +1,26 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "./models/colpaligemma2-3b-pt-448-base",
+ "bias": "none",
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": "gaussian",
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 32,
+ "lora_dropout": 0.1,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 32,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": "(.*(language_model).*(down_proj|gate_proj|up_proj|k_proj|q_proj|v_proj|o_proj).*$|.*(custom_text_proj).*$)",
+ "task_type": "FEATURE_EXTRACTION",
+ "use_dora": false,
+ "use_rslora": false
+ }
checkpoint-4620/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:158dbd7a04d7bfc841004b489a80237ce022cc2155988a92c964a337d1ce9172
+ size 83279144
checkpoint-4620/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b381975f9ef9a3313a0f3f1b6abc90831d51d8b6faba4b51d09f6754584c910
+ size 166753564
checkpoint-4620/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f388ea0b9e292dc1cd004c157a2360717e6b7a214c0abfc6175594ec6d85042f
+ size 14244
checkpoint-4620/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ee09481cafe062c8fdb56aba6a971908976f16378f639208e512d7beed1f216a
+ size 1064
checkpoint-4620/trainer_state.json ADDED
@@ -0,0 +1,3635 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 5.0,
5
+ "eval_steps": 100,
6
+ "global_step": 4620,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.010822510822510822,
13
+ "grad_norm": 101.5,
14
+ "learning_rate": 5e-06,
15
+ "loss": 1.9407,
16
+ "step": 10
17
+ },
18
+ {
19
+ "epoch": 0.021645021645021644,
20
+ "grad_norm": 30.625,
21
+ "learning_rate": 1e-05,
22
+ "loss": 1.5034,
23
+ "step": 20
24
+ },
25
+ {
26
+ "epoch": 0.032467532467532464,
27
+ "grad_norm": 22.125,
28
+ "learning_rate": 1.5e-05,
29
+ "loss": 1.0526,
30
+ "step": 30
31
+ },
32
+ {
33
+ "epoch": 0.04329004329004329,
34
+ "grad_norm": 8.375,
35
+ "learning_rate": 2e-05,
36
+ "loss": 0.9051,
37
+ "step": 40
38
+ },
39
+ {
40
+ "epoch": 0.05411255411255411,
41
+ "grad_norm": 3.75,
42
+ "learning_rate": 2.5e-05,
43
+ "loss": 0.7809,
44
+ "step": 50
45
+ },
46
+ {
47
+ "epoch": 0.06493506493506493,
48
+ "grad_norm": 1.625,
49
+ "learning_rate": 3e-05,
50
+ "loss": 0.7329,
51
+ "step": 60
52
+ },
53
+ {
54
+ "epoch": 0.07575757575757576,
55
+ "grad_norm": 1.96875,
56
+ "learning_rate": 3.5e-05,
57
+ "loss": 0.717,
58
+ "step": 70
59
+ },
60
+ {
61
+ "epoch": 0.08658008658008658,
62
+ "grad_norm": 1.5546875,
63
+ "learning_rate": 4e-05,
64
+ "loss": 0.6936,
65
+ "step": 80
66
+ },
67
+ {
68
+ "epoch": 0.09740259740259741,
69
+ "grad_norm": 3.75,
70
+ "learning_rate": 4.5e-05,
71
+ "loss": 0.664,
72
+ "step": 90
73
+ },
74
+ {
75
+ "epoch": 0.10822510822510822,
76
+ "grad_norm": 3.5,
77
+ "learning_rate": 5e-05,
78
+ "loss": 0.5702,
79
+ "step": 100
80
+ },
81
+ {
82
+ "epoch": 0.10822510822510822,
83
+ "eval_loss": 0.5504737496376038,
84
+ "eval_runtime": 14.4773,
85
+ "eval_samples_per_second": 34.537,
86
+ "eval_steps_per_second": 0.276,
87
+ "step": 100
88
+ },
89
+ {
90
+ "epoch": 0.11904761904761904,
91
+ "grad_norm": 3.125,
92
+ "learning_rate": 4.9889380530973454e-05,
93
+ "loss": 0.4897,
94
+ "step": 110
95
+ },
96
+ {
97
+ "epoch": 0.12987012987012986,
98
+ "grad_norm": 2.96875,
99
+ "learning_rate": 4.9778761061946906e-05,
100
+ "loss": 0.4462,
101
+ "step": 120
102
+ },
103
+ {
104
+ "epoch": 0.1406926406926407,
105
+ "grad_norm": 2.5625,
106
+ "learning_rate": 4.966814159292036e-05,
107
+ "loss": 0.4034,
108
+ "step": 130
109
+ },
110
+ {
111
+ "epoch": 0.15151515151515152,
112
+ "grad_norm": 2.484375,
113
+ "learning_rate": 4.955752212389381e-05,
114
+ "loss": 0.3774,
115
+ "step": 140
116
+ },
117
+ {
118
+ "epoch": 0.16233766233766234,
119
+ "grad_norm": 3.171875,
120
+ "learning_rate": 4.944690265486726e-05,
121
+ "loss": 0.3513,
122
+ "step": 150
123
+ },
124
+ {
125
+ "epoch": 0.17316017316017315,
126
+ "grad_norm": 2.15625,
127
+ "learning_rate": 4.9336283185840707e-05,
128
+ "loss": 0.341,
129
+ "step": 160
130
+ },
131
+ {
132
+ "epoch": 0.18398268398268397,
133
+ "grad_norm": 2.515625,
134
+ "learning_rate": 4.9225663716814165e-05,
135
+ "loss": 0.3309,
136
+ "step": 170
137
+ },
138
+ {
139
+ "epoch": 0.19480519480519481,
140
+ "grad_norm": 1.9375,
141
+ "learning_rate": 4.911504424778761e-05,
142
+ "loss": 0.3161,
143
+ "step": 180
144
+ },
145
+ {
146
+ "epoch": 0.20562770562770563,
147
+ "grad_norm": 1.78125,
148
+ "learning_rate": 4.900442477876107e-05,
149
+ "loss": 0.3011,
150
+ "step": 190
151
+ },
152
+ {
153
+ "epoch": 0.21645021645021645,
154
+ "grad_norm": 1.6640625,
155
+ "learning_rate": 4.8893805309734514e-05,
156
+ "loss": 0.2863,
157
+ "step": 200
158
+ },
159
+ {
160
+ "epoch": 0.21645021645021645,
161
+ "eval_loss": 0.3175188899040222,
162
+ "eval_runtime": 14.5331,
163
+ "eval_samples_per_second": 34.404,
164
+ "eval_steps_per_second": 0.275,
165
+ "step": 200
166
+ },
167
+ {
168
+ "epoch": 0.22727272727272727,
169
+ "grad_norm": 2.140625,
170
+ "learning_rate": 4.8783185840707966e-05,
171
+ "loss": 0.2608,
172
+ "step": 210
173
+ },
174
+ {
175
+ "epoch": 0.23809523809523808,
176
+ "grad_norm": 1.328125,
177
+ "learning_rate": 4.867256637168142e-05,
178
+ "loss": 0.2908,
179
+ "step": 220
180
+ },
181
+ {
182
+ "epoch": 0.24891774891774893,
183
+ "grad_norm": 2.421875,
184
+ "learning_rate": 4.856194690265487e-05,
185
+ "loss": 0.2613,
186
+ "step": 230
187
+ },
188
+ {
189
+ "epoch": 0.2597402597402597,
190
+ "grad_norm": 1.859375,
191
+ "learning_rate": 4.845132743362832e-05,
192
+ "loss": 0.2535,
193
+ "step": 240
194
+ },
195
+ {
196
+ "epoch": 0.27056277056277056,
197
+ "grad_norm": 1.4609375,
198
+ "learning_rate": 4.834070796460177e-05,
199
+ "loss": 0.2738,
200
+ "step": 250
201
+ },
202
+ {
203
+ "epoch": 0.2813852813852814,
204
+ "grad_norm": 2.265625,
205
+ "learning_rate": 4.823008849557522e-05,
206
+ "loss": 0.2413,
207
+ "step": 260
208
+ },
209
+ {
210
+ "epoch": 0.2922077922077922,
211
+ "grad_norm": 1.84375,
212
+ "learning_rate": 4.8119469026548677e-05,
213
+ "loss": 0.2608,
214
+ "step": 270
215
+ },
216
+ {
217
+ "epoch": 0.30303030303030304,
218
+ "grad_norm": 1.28125,
219
+ "learning_rate": 4.800884955752213e-05,
220
+ "loss": 0.2711,
221
+ "step": 280
222
+ },
223
+ {
224
+ "epoch": 0.31385281385281383,
225
+ "grad_norm": 1.453125,
226
+ "learning_rate": 4.789823008849558e-05,
227
+ "loss": 0.2417,
228
+ "step": 290
229
+ },
230
+ {
231
+ "epoch": 0.3246753246753247,
232
+ "grad_norm": 1.59375,
233
+ "learning_rate": 4.778761061946903e-05,
234
+ "loss": 0.267,
235
+ "step": 300
236
+ },
237
+ {
238
+ "epoch": 0.3246753246753247,
239
+ "eval_loss": 0.279142290353775,
240
+ "eval_runtime": 14.1782,
241
+ "eval_samples_per_second": 35.265,
242
+ "eval_steps_per_second": 0.282,
243
+ "step": 300
244
+ },
245
+ {
246
+ "epoch": 0.3354978354978355,
247
+ "grad_norm": 1.40625,
248
+ "learning_rate": 4.767699115044248e-05,
249
+ "loss": 0.2589,
250
+ "step": 310
251
+ },
252
+ {
253
+ "epoch": 0.3463203463203463,
254
+ "grad_norm": 7.9375,
255
+ "learning_rate": 4.7566371681415936e-05,
256
+ "loss": 0.253,
257
+ "step": 320
258
+ },
259
+ {
260
+ "epoch": 0.35714285714285715,
261
+ "grad_norm": 1.921875,
262
+ "learning_rate": 4.745575221238938e-05,
263
+ "loss": 0.2683,
264
+ "step": 330
265
+ },
266
+ {
267
+ "epoch": 0.36796536796536794,
268
+ "grad_norm": 1.3125,
269
+ "learning_rate": 4.734513274336283e-05,
270
+ "loss": 0.2483,
271
+ "step": 340
272
+ },
273
+ {
274
+ "epoch": 0.3787878787878788,
275
+ "grad_norm": 2.5,
276
+ "learning_rate": 4.7234513274336284e-05,
277
+ "loss": 0.2512,
278
+ "step": 350
279
+ },
280
+ {
281
+ "epoch": 0.38961038961038963,
282
+ "grad_norm": 1.421875,
283
+ "learning_rate": 4.7123893805309736e-05,
284
+ "loss": 0.2346,
285
+ "step": 360
286
+ },
287
+ {
288
+ "epoch": 0.4004329004329004,
289
+ "grad_norm": 1.8828125,
290
+ "learning_rate": 4.701327433628319e-05,
291
+ "loss": 0.248,
292
+ "step": 370
293
+ },
294
+ {
295
+ "epoch": 0.41125541125541126,
296
+ "grad_norm": 1.5859375,
297
+ "learning_rate": 4.690265486725664e-05,
298
+ "loss": 0.2663,
299
+ "step": 380
300
+ },
301
+ {
302
+ "epoch": 0.42207792207792205,
303
+ "grad_norm": 1.6171875,
304
+ "learning_rate": 4.679203539823009e-05,
305
+ "loss": 0.24,
306
+ "step": 390
307
+ },
308
+ {
309
+ "epoch": 0.4329004329004329,
310
+ "grad_norm": 1.5,
311
+ "learning_rate": 4.668141592920354e-05,
312
+ "loss": 0.2425,
313
+ "step": 400
314
+ },
315
+ {
316
+ "epoch": 0.4329004329004329,
317
+ "eval_loss": 0.2609393000602722,
318
+ "eval_runtime": 14.168,
319
+ "eval_samples_per_second": 35.291,
320
+ "eval_steps_per_second": 0.282,
321
+ "step": 400
322
+ },
323
+ {
324
+ "epoch": 0.44372294372294374,
325
+ "grad_norm": 2.234375,
326
+ "learning_rate": 4.657079646017699e-05,
327
+ "loss": 0.2377,
328
+ "step": 410
329
+ },
330
+ {
331
+ "epoch": 0.45454545454545453,
332
+ "grad_norm": 1.5625,
333
+ "learning_rate": 4.646017699115045e-05,
334
+ "loss": 0.2552,
335
+ "step": 420
336
+ },
337
+ {
338
+ "epoch": 0.4653679653679654,
339
+ "grad_norm": 2.171875,
340
+ "learning_rate": 4.63495575221239e-05,
341
+ "loss": 0.2185,
342
+ "step": 430
343
+ },
344
+ {
345
+ "epoch": 0.47619047619047616,
346
+ "grad_norm": 1.796875,
347
+ "learning_rate": 4.6238938053097344e-05,
348
+ "loss": 0.2484,
349
+ "step": 440
350
+ },
351
+ {
352
+ "epoch": 0.487012987012987,
353
+ "grad_norm": 1.15625,
354
+ "learning_rate": 4.61283185840708e-05,
355
+ "loss": 0.2421,
356
+ "step": 450
357
+ },
358
+ {
359
+ "epoch": 0.49783549783549785,
360
+ "grad_norm": 1.7265625,
361
+ "learning_rate": 4.601769911504425e-05,
362
+ "loss": 0.2274,
363
+ "step": 460
364
+ },
365
+ {
366
+ "epoch": 0.5086580086580087,
367
+ "grad_norm": 1.828125,
368
+ "learning_rate": 4.5907079646017706e-05,
369
+ "loss": 0.2337,
370
+ "step": 470
371
+ },
372
+ {
373
+ "epoch": 0.5194805194805194,
374
+ "grad_norm": 1.6328125,
375
+ "learning_rate": 4.579646017699115e-05,
376
+ "loss": 0.234,
377
+ "step": 480
378
+ },
379
+ {
380
+ "epoch": 0.5303030303030303,
381
+ "grad_norm": 2.171875,
382
+ "learning_rate": 4.56858407079646e-05,
383
+ "loss": 0.226,
384
+ "step": 490
385
+ },
386
+ {
387
+ "epoch": 0.5411255411255411,
388
+ "grad_norm": 1.75,
389
+ "learning_rate": 4.5575221238938055e-05,
390
+ "loss": 0.2318,
391
+ "step": 500
392
+ },
393
+ {
394
+ "epoch": 0.5411255411255411,
395
+ "eval_loss": 0.2462214082479477,
396
+ "eval_runtime": 14.0685,
397
+ "eval_samples_per_second": 35.541,
398
+ "eval_steps_per_second": 0.284,
399
+ "step": 500
400
+ },
401
+ {
402
+ "epoch": 0.551948051948052,
403
+ "grad_norm": 1.6953125,
404
+ "learning_rate": 4.5464601769911507e-05,
405
+ "loss": 0.2078,
406
+ "step": 510
407
+ },
408
+ {
409
+ "epoch": 0.5627705627705628,
410
+ "grad_norm": 1.6640625,
411
+ "learning_rate": 4.535398230088496e-05,
412
+ "loss": 0.2114,
413
+ "step": 520
414
+ },
415
+ {
416
+ "epoch": 0.5735930735930735,
417
+ "grad_norm": 1.46875,
418
+ "learning_rate": 4.524336283185841e-05,
419
+ "loss": 0.21,
420
+ "step": 530
421
+ },
422
+ {
423
+ "epoch": 0.5844155844155844,
424
+ "grad_norm": 1.46875,
425
+ "learning_rate": 4.5132743362831855e-05,
426
+ "loss": 0.2178,
427
+ "step": 540
428
+ },
429
+ {
430
+ "epoch": 0.5952380952380952,
431
+ "grad_norm": 1.65625,
432
+ "learning_rate": 4.5022123893805314e-05,
433
+ "loss": 0.2263,
434
+ "step": 550
435
+ },
436
+ {
437
+ "epoch": 0.6060606060606061,
438
+ "grad_norm": 1.8671875,
439
+ "learning_rate": 4.491150442477876e-05,
440
+ "loss": 0.2351,
441
+ "step": 560
442
+ },
443
+ {
444
+ "epoch": 0.6168831168831169,
445
+ "grad_norm": 1.25,
446
+ "learning_rate": 4.480088495575222e-05,
447
+ "loss": 0.2111,
448
+ "step": 570
449
+ },
450
+ {
451
+ "epoch": 0.6277056277056277,
452
+ "grad_norm": 1.578125,
453
+ "learning_rate": 4.469026548672566e-05,
454
+ "loss": 0.2109,
455
+ "step": 580
456
+ },
457
+ {
458
+ "epoch": 0.6385281385281385,
459
+ "grad_norm": 1.515625,
460
+ "learning_rate": 4.4579646017699114e-05,
461
+ "loss": 0.211,
462
+ "step": 590
463
+ },
464
+ {
465
+ "epoch": 0.6493506493506493,
466
+ "grad_norm": 1.3359375,
467
+ "learning_rate": 4.446902654867257e-05,
468
+ "loss": 0.2237,
469
+ "step": 600
470
+ },
471
+ {
472
+ "epoch": 0.6493506493506493,
473
+ "eval_loss": 0.22987733781337738,
474
+ "eval_runtime": 14.0366,
475
+ "eval_samples_per_second": 35.621,
476
+ "eval_steps_per_second": 0.285,
477
+ "step": 600
478
+ },
479
+ {
480
+ "epoch": 0.6601731601731602,
481
+ "grad_norm": 1.3828125,
482
+ "learning_rate": 4.435840707964602e-05,
483
+ "loss": 0.2215,
484
+ "step": 610
485
+ },
486
+ {
487
+ "epoch": 0.670995670995671,
488
+ "grad_norm": 1.7578125,
489
+ "learning_rate": 4.4247787610619477e-05,
490
+ "loss": 0.2084,
491
+ "step": 620
492
+ },
493
+ {
494
+ "epoch": 0.6818181818181818,
495
+ "grad_norm": 1.0859375,
496
+ "learning_rate": 4.413716814159292e-05,
497
+ "loss": 0.2222,
498
+ "step": 630
499
+ },
500
+ {
501
+ "epoch": 0.6926406926406926,
502
+ "grad_norm": 1.1875,
503
+ "learning_rate": 4.4026548672566373e-05,
504
+ "loss": 0.1977,
505
+ "step": 640
506
+ },
507
+ {
508
+ "epoch": 0.7034632034632035,
509
+ "grad_norm": 1.953125,
510
+ "learning_rate": 4.3915929203539825e-05,
511
+ "loss": 0.1945,
512
+ "step": 650
513
+ },
514
+ {
515
+ "epoch": 0.7142857142857143,
516
+ "grad_norm": 1.6484375,
517
+ "learning_rate": 4.380530973451328e-05,
518
+ "loss": 0.2288,
519
+ "step": 660
520
+ },
521
+ {
522
+ "epoch": 0.7251082251082251,
523
+ "grad_norm": 1.3828125,
524
+ "learning_rate": 4.369469026548673e-05,
525
+ "loss": 0.219,
526
+ "step": 670
527
+ },
528
+ {
529
+ "epoch": 0.7359307359307359,
530
+ "grad_norm": 1.6796875,
531
+ "learning_rate": 4.358407079646018e-05,
532
+ "loss": 0.2072,
533
+ "step": 680
534
+ },
535
+ {
536
+ "epoch": 0.7467532467532467,
537
+ "grad_norm": 1.3984375,
538
+ "learning_rate": 4.3473451327433626e-05,
539
+ "loss": 0.2161,
540
+ "step": 690
541
+ },
542
+ {
543
+ "epoch": 0.7575757575757576,
544
+ "grad_norm": 1.3671875,
545
+ "learning_rate": 4.3362831858407084e-05,
546
+ "loss": 0.2288,
547
+ "step": 700
548
+ },
549
+ {
550
+ "epoch": 0.7575757575757576,
551
+ "eval_loss": 0.22867494821548462,
552
+ "eval_runtime": 13.9772,
553
+ "eval_samples_per_second": 35.772,
554
+ "eval_steps_per_second": 0.286,
555
+ "step": 700
556
+ },
557
+ {
558
+ "epoch": 0.7683982683982684,
559
+ "grad_norm": 1.921875,
560
+ "learning_rate": 4.325221238938053e-05,
561
+ "loss": 0.1962,
562
+ "step": 710
563
+ },
564
+ {
565
+ "epoch": 0.7792207792207793,
566
+ "grad_norm": 1.5625,
567
+ "learning_rate": 4.314159292035399e-05,
568
+ "loss": 0.2086,
569
+ "step": 720
570
+ },
571
+ {
572
+ "epoch": 0.79004329004329,
573
+ "grad_norm": 1.2734375,
574
+ "learning_rate": 4.303097345132743e-05,
575
+ "loss": 0.1968,
576
+ "step": 730
577
+ },
578
+ {
579
+ "epoch": 0.8008658008658008,
580
+ "grad_norm": 1.6328125,
581
+ "learning_rate": 4.2920353982300885e-05,
582
+ "loss": 0.2114,
583
+ "step": 740
584
+ },
585
+ {
586
+ "epoch": 0.8116883116883117,
587
+ "grad_norm": 1.8671875,
588
+ "learning_rate": 4.280973451327434e-05,
589
+ "loss": 0.2008,
590
+ "step": 750
591
+ },
592
+ {
593
+ "epoch": 0.8225108225108225,
594
+ "grad_norm": 1.9296875,
595
+ "learning_rate": 4.269911504424779e-05,
596
+ "loss": 0.206,
597
+ "step": 760
598
+ },
599
+ {
600
+ "epoch": 0.8333333333333334,
601
+ "grad_norm": 1.484375,
602
+ "learning_rate": 4.258849557522124e-05,
603
+ "loss": 0.2091,
604
+ "step": 770
605
+ },
606
+ {
607
+ "epoch": 0.8441558441558441,
608
+ "grad_norm": 1.625,
609
+ "learning_rate": 4.247787610619469e-05,
610
+ "loss": 0.2137,
611
+ "step": 780
612
+ },
613
+ {
614
+ "epoch": 0.854978354978355,
615
+ "grad_norm": 1.78125,
616
+ "learning_rate": 4.2367256637168144e-05,
617
+ "loss": 0.2205,
618
+ "step": 790
619
+ },
620
+ {
621
+ "epoch": 0.8658008658008658,
622
+ "grad_norm": 1.0859375,
623
+ "learning_rate": 4.2256637168141596e-05,
624
+ "loss": 0.2096,
625
+ "step": 800
626
+ },
627
+ {
628
+ "epoch": 0.8658008658008658,
629
+ "eval_loss": 0.2200011909008026,
630
+ "eval_runtime": 13.8226,
631
+ "eval_samples_per_second": 36.173,
632
+ "eval_steps_per_second": 0.289,
633
+ "step": 800
634
+ },
635
+ {
636
+ "epoch": 0.8766233766233766,
637
+ "grad_norm": 1.5234375,
638
+ "learning_rate": 4.214601769911505e-05,
639
+ "loss": 0.2114,
640
+ "step": 810
641
+ },
642
+ {
643
+ "epoch": 0.8874458874458875,
644
+ "grad_norm": 1.84375,
645
+ "learning_rate": 4.20353982300885e-05,
646
+ "loss": 0.1894,
647
+ "step": 820
648
+ },
649
+ {
650
+ "epoch": 0.8982683982683982,
651
+ "grad_norm": 1.6484375,
652
+ "learning_rate": 4.192477876106195e-05,
653
+ "loss": 0.2055,
654
+ "step": 830
655
+ },
656
+ {
657
+ "epoch": 0.9090909090909091,
658
+ "grad_norm": 1.3359375,
659
+ "learning_rate": 4.1814159292035396e-05,
660
+ "loss": 0.1917,
661
+ "step": 840
662
+ },
663
+ {
664
+ "epoch": 0.9199134199134199,
665
+ "grad_norm": 1.1171875,
666
+ "learning_rate": 4.1703539823008855e-05,
667
+ "loss": 0.1775,
668
+ "step": 850
669
+ },
670
+ {
671
+ "epoch": 0.9307359307359307,
672
+ "grad_norm": 0.91796875,
673
+ "learning_rate": 4.15929203539823e-05,
674
+ "loss": 0.1924,
675
+ "step": 860
676
+ },
677
+ {
678
+ "epoch": 0.9415584415584416,
679
+ "grad_norm": 2.015625,
680
+ "learning_rate": 4.148230088495575e-05,
681
+ "loss": 0.2056,
682
+ "step": 870
683
+ },
684
+ {
685
+ "epoch": 0.9523809523809523,
686
+ "grad_norm": 1.6015625,
687
+ "learning_rate": 4.1371681415929203e-05,
688
+ "loss": 0.1973,
689
+ "step": 880
690
+ },
691
+ {
692
+ "epoch": 0.9632034632034632,
693
+ "grad_norm": 1.5703125,
694
+ "learning_rate": 4.1261061946902655e-05,
695
+ "loss": 0.2009,
696
+ "step": 890
697
+ },
698
+ {
699
+ "epoch": 0.974025974025974,
700
+ "grad_norm": 1.5546875,
701
+ "learning_rate": 4.115044247787611e-05,
702
+ "loss": 0.2166,
703
+ "step": 900
704
+ },
705
+ {
706
+ "epoch": 0.974025974025974,
707
+ "eval_loss": 0.2135138362646103,
708
+ "eval_runtime": 13.5879,
709
+ "eval_samples_per_second": 36.798,
710
+ "eval_steps_per_second": 0.294,
711
+ "step": 900
712
+ },
713
+ {
714
+ "epoch": 0.9848484848484849,
715
+ "grad_norm": 1.2109375,
716
+ "learning_rate": 4.103982300884956e-05,
717
+ "loss": 0.1889,
718
+ "step": 910
719
+ },
720
+ {
721
+ "epoch": 0.9956709956709957,
722
+ "grad_norm": 4.125,
723
+ "learning_rate": 4.092920353982301e-05,
724
+ "loss": 0.1955,
725
+ "step": 920
726
+ },
727
+ {
728
+ "epoch": 1.0064935064935066,
729
+ "grad_norm": 1.625,
730
+ "learning_rate": 4.081858407079646e-05,
731
+ "loss": 0.1949,
732
+ "step": 930
733
+ },
734
+ {
735
+ "epoch": 1.0173160173160174,
736
+ "grad_norm": 1.578125,
737
+ "learning_rate": 4.0707964601769914e-05,
738
+ "loss": 0.171,
739
+ "step": 940
740
+ },
741
+ {
742
+ "epoch": 1.0281385281385282,
743
+ "grad_norm": 1.6484375,
744
+ "learning_rate": 4.0597345132743366e-05,
745
+ "loss": 0.193,
746
+ "step": 950
747
+ },
748
+ {
749
+ "epoch": 1.0389610389610389,
750
+ "grad_norm": 1.5390625,
751
+ "learning_rate": 4.048672566371682e-05,
752
+ "loss": 0.1908,
753
+ "step": 960
754
+ },
755
+ {
756
+ "epoch": 1.0497835497835497,
757
+ "grad_norm": 1.40625,
758
+ "learning_rate": 4.037610619469026e-05,
759
+ "loss": 0.1824,
760
+ "step": 970
761
+ },
762
+ {
763
+ "epoch": 1.0606060606060606,
764
+ "grad_norm": 1.4921875,
765
+ "learning_rate": 4.026548672566372e-05,
766
+ "loss": 0.1822,
767
+ "step": 980
768
+ },
769
+ {
770
+ "epoch": 1.0714285714285714,
771
+ "grad_norm": 1.8203125,
772
+ "learning_rate": 4.015486725663717e-05,
773
+ "loss": 0.198,
774
+ "step": 990
775
+ },
776
+ {
777
+ "epoch": 1.0822510822510822,
778
+ "grad_norm": 1.2578125,
779
+ "learning_rate": 4.0044247787610625e-05,
780
+ "loss": 0.1972,
781
+ "step": 1000
782
+ },
783
+ {
784
+ "epoch": 1.0822510822510822,
785
+ "eval_loss": 0.20468579232692719,
786
+ "eval_runtime": 13.8662,
787
+ "eval_samples_per_second": 36.059,
788
+ "eval_steps_per_second": 0.288,
789
+ "step": 1000
790
+ },
791
+ {
792
+ "epoch": 1.093073593073593,
793
+ "grad_norm": 1.8515625,
794
+ "learning_rate": 3.993362831858407e-05,
795
+ "loss": 0.1843,
796
+ "step": 1010
797
+ },
798
+ {
799
+ "epoch": 1.103896103896104,
800
+ "grad_norm": 1.65625,
801
+ "learning_rate": 3.982300884955752e-05,
802
+ "loss": 0.1706,
803
+ "step": 1020
804
+ },
805
+ {
806
+ "epoch": 1.1147186147186148,
807
+ "grad_norm": 1.4296875,
808
+ "learning_rate": 3.9712389380530974e-05,
809
+ "loss": 0.1684,
810
+ "step": 1030
811
+ },
812
+ {
813
+ "epoch": 1.1255411255411256,
814
+ "grad_norm": 2.03125,
815
+ "learning_rate": 3.9601769911504426e-05,
816
+ "loss": 0.1877,
817
+ "step": 1040
818
+ },
819
+ {
820
+ "epoch": 1.1363636363636362,
821
+ "grad_norm": 1.5078125,
822
+ "learning_rate": 3.949115044247788e-05,
823
+ "loss": 0.1948,
824
+ "step": 1050
825
+ },
826
+ {
827
+ "epoch": 1.1471861471861473,
828
+ "grad_norm": 1.8046875,
829
+ "learning_rate": 3.938053097345133e-05,
830
+ "loss": 0.1846,
831
+ "step": 1060
832
+ },
833
+ {
834
+ "epoch": 1.158008658008658,
835
+ "grad_norm": 1.7421875,
836
+ "learning_rate": 3.926991150442478e-05,
837
+ "loss": 0.1812,
838
+ "step": 1070
839
+ },
840
+ {
841
+ "epoch": 1.1688311688311688,
842
+ "grad_norm": 1.296875,
843
+ "learning_rate": 3.915929203539823e-05,
844
+ "loss": 0.1773,
845
+ "step": 1080
846
+ },
847
+ {
848
+ "epoch": 1.1796536796536796,
849
+ "grad_norm": 2.078125,
850
+ "learning_rate": 3.9048672566371685e-05,
851
+ "loss": 0.1716,
852
+ "step": 1090
853
+ },
854
+ {
855
+ "epoch": 1.1904761904761905,
856
+ "grad_norm": 1.484375,
857
+ "learning_rate": 3.893805309734514e-05,
858
+ "loss": 0.1912,
859
+ "step": 1100
860
+ },
861
+ {
862
+ "epoch": 1.1904761904761905,
863
+ "eval_loss": 0.21200305223464966,
864
+ "eval_runtime": 18.2843,
865
+ "eval_samples_per_second": 27.346,
866
+ "eval_steps_per_second": 0.219,
867
+ "step": 1100
868
+ },
869
+ {
870
+ "epoch": 1.2012987012987013,
871
+ "grad_norm": 1.7265625,
872
+ "learning_rate": 3.882743362831859e-05,
873
+ "loss": 0.1739,
874
+ "step": 1110
875
+ },
876
+ {
877
+ "epoch": 1.2121212121212122,
878
+ "grad_norm": 1.7109375,
879
+ "learning_rate": 3.8716814159292034e-05,
880
+ "loss": 0.1747,
881
+ "step": 1120
882
+ },
883
+ {
884
+ "epoch": 1.222943722943723,
885
+ "grad_norm": 1.46875,
886
+ "learning_rate": 3.860619469026549e-05,
887
+ "loss": 0.1631,
888
+ "step": 1130
889
+ },
890
+ {
891
+ "epoch": 1.2337662337662338,
892
+ "grad_norm": 1.609375,
893
+ "learning_rate": 3.849557522123894e-05,
894
+ "loss": 0.1722,
895
+ "step": 1140
896
+ },
897
+ {
898
+ "epoch": 1.2445887445887447,
899
+ "grad_norm": 2.015625,
900
+ "learning_rate": 3.8384955752212396e-05,
901
+ "loss": 0.1911,
902
+ "step": 1150
903
+ },
904
+ {
905
+ "epoch": 1.2554112554112553,
906
+ "grad_norm": 1.3828125,
907
+ "learning_rate": 3.827433628318584e-05,
908
+ "loss": 0.1557,
909
+ "step": 1160
910
+ },
911
+ {
912
+ "epoch": 1.2662337662337662,
913
+ "grad_norm": 1.9140625,
914
+ "learning_rate": 3.816371681415929e-05,
915
+ "loss": 0.1621,
916
+ "step": 1170
917
+ },
918
+ {
919
+ "epoch": 1.277056277056277,
920
+ "grad_norm": 1.6171875,
921
+ "learning_rate": 3.8053097345132744e-05,
922
+ "loss": 0.1694,
923
+ "step": 1180
924
+ },
925
+ {
926
+ "epoch": 1.2878787878787878,
927
+ "grad_norm": 3.1875,
928
+ "learning_rate": 3.7942477876106196e-05,
929
+ "loss": 0.1774,
930
+ "step": 1190
931
+ },
932
+ {
933
+ "epoch": 1.2987012987012987,
934
+ "grad_norm": 1.1015625,
935
+ "learning_rate": 3.783185840707965e-05,
936
+ "loss": 0.181,
937
+ "step": 1200
938
+ },
939
+ {
940
+ "epoch": 1.2987012987012987,
941
+ "eval_loss": 0.20562683045864105,
942
+ "eval_runtime": 14.2589,
943
+ "eval_samples_per_second": 35.066,
944
+ "eval_steps_per_second": 0.281,
945
+ "step": 1200
946
+ },
947
+ {
948
+ "epoch": 1.3095238095238095,
949
+ "grad_norm": 2.09375,
950
+ "learning_rate": 3.77212389380531e-05,
951
+ "loss": 0.1686,
952
+ "step": 1210
953
+ },
954
+ {
955
+ "epoch": 1.3203463203463204,
956
+ "grad_norm": 1.2734375,
957
+ "learning_rate": 3.7610619469026545e-05,
958
+ "loss": 0.1714,
959
+ "step": 1220
960
+ },
961
+ {
962
+ "epoch": 1.3311688311688312,
963
+ "grad_norm": 1.109375,
964
+ "learning_rate": 3.7500000000000003e-05,
965
+ "loss": 0.1869,
966
+ "step": 1230
967
+ },
968
+ {
969
+ "epoch": 1.341991341991342,
970
+ "grad_norm": 1.9296875,
971
+ "learning_rate": 3.7389380530973455e-05,
972
+ "loss": 0.1684,
973
+ "step": 1240
974
+ },
975
+ {
976
+ "epoch": 1.3528138528138527,
977
+ "grad_norm": 1.6328125,
978
+ "learning_rate": 3.727876106194691e-05,
979
+ "loss": 0.1747,
980
+ "step": 1250
981
+ },
982
+ {
983
+ "epoch": 1.3636363636363638,
984
+ "grad_norm": 1.0859375,
985
+ "learning_rate": 3.716814159292036e-05,
986
+ "loss": 0.1637,
987
+ "step": 1260
988
+ },
989
+ {
990
+ "epoch": 1.3744588744588744,
991
+ "grad_norm": 1.390625,
992
+ "learning_rate": 3.7057522123893804e-05,
993
+ "loss": 0.1684,
994
+ "step": 1270
995
+ },
996
+ {
997
+ "epoch": 1.3852813852813852,
998
+ "grad_norm": 1.453125,
999
+ "learning_rate": 3.694690265486726e-05,
1000
+ "loss": 0.1849,
1001
+ "step": 1280
1002
+ },
1003
+ {
1004
+ "epoch": 1.396103896103896,
1005
+ "grad_norm": 1.3046875,
1006
+ "learning_rate": 3.683628318584071e-05,
1007
+ "loss": 0.182,
1008
+ "step": 1290
1009
+ },
1010
+ {
1011
+ "epoch": 1.406926406926407,
1012
+ "grad_norm": 1.640625,
1013
+ "learning_rate": 3.672566371681416e-05,
1014
+ "loss": 0.1756,
1015
+ "step": 1300
1016
+ },
1017
+ {
1018
+ "epoch": 1.406926406926407,
1019
+ "eval_loss": 0.1982562243938446,
1020
+ "eval_runtime": 13.8702,
1021
+ "eval_samples_per_second": 36.049,
1022
+ "eval_steps_per_second": 0.288,
1023
+ "step": 1300
1024
+ },
1025
+ {
1026
+ "epoch": 1.4177489177489178,
1027
+ "grad_norm": 1.9375,
1028
+ "learning_rate": 3.661504424778761e-05,
1029
+ "loss": 0.1787,
1030
+ "step": 1310
1031
+ },
1032
+ {
1033
+ "epoch": 1.4285714285714286,
1034
+ "grad_norm": 1.453125,
1035
+ "learning_rate": 3.650442477876106e-05,
1036
+ "loss": 0.176,
1037
+ "step": 1320
1038
+ },
1039
+ {
1040
+ "epoch": 1.4393939393939394,
1041
+ "grad_norm": 1.6796875,
1042
+ "learning_rate": 3.6393805309734515e-05,
1043
+ "loss": 0.1639,
1044
+ "step": 1330
1045
+ },
1046
+ {
1047
+ "epoch": 1.4502164502164503,
1048
+ "grad_norm": 1.7421875,
1049
+ "learning_rate": 3.628318584070797e-05,
1050
+ "loss": 0.1786,
1051
+ "step": 1340
1052
+ },
1053
+ {
1054
+ "epoch": 1.4610389610389611,
1055
+ "grad_norm": 1.4609375,
1056
+ "learning_rate": 3.617256637168142e-05,
1057
+ "loss": 0.1705,
1058
+ "step": 1350
1059
+ },
1060
+ {
1061
+ "epoch": 1.4718614718614718,
1062
+ "grad_norm": 2.03125,
1063
+ "learning_rate": 3.606194690265487e-05,
1064
+ "loss": 0.1619,
1065
+ "step": 1360
1066
+ },
1067
+ {
1068
+ "epoch": 1.4826839826839826,
1069
+ "grad_norm": 1.609375,
1070
+ "learning_rate": 3.5951327433628315e-05,
1071
+ "loss": 0.1931,
1072
+ "step": 1370
1073
+ },
1074
+ {
1075
+ "epoch": 1.4935064935064934,
1076
+ "grad_norm": 1.1484375,
1077
+ "learning_rate": 3.5840707964601774e-05,
1078
+ "loss": 0.1693,
1079
+ "step": 1380
1080
+ },
1081
+ {
1082
+ "epoch": 1.5043290043290043,
1083
+ "grad_norm": 1.6328125,
1084
+ "learning_rate": 3.573008849557522e-05,
1085
+ "loss": 0.1718,
1086
+ "step": 1390
1087
+ },
1088
+ {
1089
+ "epoch": 1.5151515151515151,
1090
+ "grad_norm": 1.375,
1091
+ "learning_rate": 3.561946902654867e-05,
1092
+ "loss": 0.181,
1093
+ "step": 1400
1094
+ },
1095
+ {
1096
+ "epoch": 1.5151515151515151,
1097
+ "eval_loss": 0.21516482532024384,
1098
+ "eval_runtime": 13.8304,
1099
+ "eval_samples_per_second": 36.152,
1100
+ "eval_steps_per_second": 0.289,
1101
+ "step": 1400
1102
+ },
1103
+ {
1104
+ "epoch": 1.525974025974026,
1105
+ "grad_norm": 1.078125,
1106
+ "learning_rate": 3.550884955752213e-05,
1107
+ "loss": 0.1718,
1108
+ "step": 1410
1109
+ },
1110
+ {
1111
+ "epoch": 1.5367965367965368,
1112
+ "grad_norm": 1.2265625,
1113
+ "learning_rate": 3.5398230088495574e-05,
1114
+ "loss": 0.1546,
1115
+ "step": 1420
1116
+ },
1117
+ {
1118
+ "epoch": 1.5476190476190477,
1119
+ "grad_norm": 1.3671875,
1120
+ "learning_rate": 3.528761061946903e-05,
1121
+ "loss": 0.1706,
1122
+ "step": 1430
1123
+ },
1124
+ {
1125
+ "epoch": 1.5584415584415585,
1126
+ "grad_norm": 0.91796875,
1127
+ "learning_rate": 3.517699115044248e-05,
1128
+ "loss": 0.1608,
1129
+ "step": 1440
1130
+ },
1131
+ {
1132
+ "epoch": 1.5692640692640691,
1133
+ "grad_norm": 1.8046875,
1134
+ "learning_rate": 3.506637168141593e-05,
1135
+ "loss": 0.1707,
1136
+ "step": 1450
1137
+ },
1138
+ {
1139
+ "epoch": 1.5800865800865802,
1140
+ "grad_norm": 1.6484375,
1141
+ "learning_rate": 3.495575221238938e-05,
1142
+ "loss": 0.1776,
1143
+ "step": 1460
1144
+ },
1145
+ {
1146
+ "epoch": 1.5909090909090908,
1147
+ "grad_norm": 1.1640625,
1148
+ "learning_rate": 3.4845132743362834e-05,
1149
+ "loss": 0.1546,
1150
+ "step": 1470
1151
+ },
1152
+ {
1153
+ "epoch": 1.601731601731602,
1154
+ "grad_norm": 1.703125,
1155
+ "learning_rate": 3.4734513274336285e-05,
1156
+ "loss": 0.1669,
1157
+ "step": 1480
1158
+ },
1159
+ {
1160
+ "epoch": 1.6125541125541125,
1161
+ "grad_norm": 1.578125,
1162
+ "learning_rate": 3.462389380530974e-05,
1163
+ "loss": 0.1771,
1164
+ "step": 1490
1165
+ },
1166
+ {
1167
+ "epoch": 1.6233766233766234,
1168
+ "grad_norm": 1.6953125,
1169
+ "learning_rate": 3.451327433628319e-05,
1170
+ "loss": 0.1834,
1171
+ "step": 1500
1172
+ },
1173
+ {
1174
+ "epoch": 1.6233766233766234,
1175
+ "eval_loss": 0.19372014701366425,
1176
+ "eval_runtime": 13.9577,
1177
+ "eval_samples_per_second": 35.823,
1178
+ "eval_steps_per_second": 0.287,
1179
+ "step": 1500
1180
+ },
1181
+ {
1182
+ "epoch": 1.6341991341991342,
1183
+ "grad_norm": 3.09375,
1184
+ "learning_rate": 3.440265486725664e-05,
1185
+ "loss": 0.1717,
1186
+ "step": 1510
1187
+ },
1188
+ {
1189
+ "epoch": 1.645021645021645,
1190
+ "grad_norm": 2.234375,
1191
+ "learning_rate": 3.4292035398230086e-05,
1192
+ "loss": 0.1786,
1193
+ "step": 1520
1194
+ },
1195
+ {
1196
+ "epoch": 1.655844155844156,
1197
+ "grad_norm": 1.3046875,
1198
+ "learning_rate": 3.4181415929203544e-05,
1199
+ "loss": 0.1683,
1200
+ "step": 1530
1201
+ },
1202
+ {
1203
+ "epoch": 1.6666666666666665,
1204
+ "grad_norm": 1.5859375,
1205
+ "learning_rate": 3.407079646017699e-05,
1206
+ "loss": 0.1661,
1207
+ "step": 1540
1208
+ },
1209
+ {
1210
+ "epoch": 1.6774891774891776,
1211
+ "grad_norm": 2.109375,
1212
+ "learning_rate": 3.396017699115044e-05,
1213
+ "loss": 0.165,
1214
+ "step": 1550
1215
+ },
1216
+ {
1217
+ "epoch": 1.6883116883116882,
1218
+ "grad_norm": 2.203125,
1219
+ "learning_rate": 3.38495575221239e-05,
1220
+ "loss": 0.1542,
1221
+ "step": 1560
1222
+ },
1223
+ {
1224
+ "epoch": 1.6991341991341993,
1225
+ "grad_norm": 1.609375,
1226
+ "learning_rate": 3.3738938053097345e-05,
1227
+ "loss": 0.1689,
1228
+ "step": 1570
1229
+ },
1230
+ {
1231
+ "epoch": 1.70995670995671,
1232
+ "grad_norm": 1.8203125,
1233
+ "learning_rate": 3.3628318584070804e-05,
1234
+ "loss": 0.1721,
1235
+ "step": 1580
1236
+ },
1237
+ {
1238
+ "epoch": 1.7207792207792207,
1239
+ "grad_norm": 1.3515625,
1240
+ "learning_rate": 3.351769911504425e-05,
1241
+ "loss": 0.1731,
1242
+ "step": 1590
1243
+ },
1244
+ {
1245
+ "epoch": 1.7316017316017316,
1246
+ "grad_norm": 1.1796875,
1247
+ "learning_rate": 3.34070796460177e-05,
1248
+ "loss": 0.1761,
1249
+ "step": 1600
1250
+ },
1251
+ {
1252
+ "epoch": 1.7316017316017316,
1253
+ "eval_loss": 0.18740878999233246,
1254
+ "eval_runtime": 13.8261,
1255
+ "eval_samples_per_second": 36.164,
1256
+ "eval_steps_per_second": 0.289,
1257
+ "step": 1600
1258
+ },
1259
+ {
1260
+ "epoch": 1.7424242424242424,
1261
+ "grad_norm": 1.6953125,
1262
+ "learning_rate": 3.329646017699115e-05,
1263
+ "loss": 0.1666,
1264
+ "step": 1610
1265
+ },
1266
+ {
1267
+ "epoch": 1.7532467532467533,
1268
+ "grad_norm": 1.0625,
1269
+ "learning_rate": 3.3185840707964604e-05,
1270
+ "loss": 0.1578,
1271
+ "step": 1620
1272
+ },
1273
+ {
1274
+ "epoch": 1.7640692640692641,
1275
+ "grad_norm": 1.4296875,
1276
+ "learning_rate": 3.3075221238938056e-05,
1277
+ "loss": 0.1613,
1278
+ "step": 1630
1279
+ },
1280
+ {
1281
+ "epoch": 1.774891774891775,
1282
+ "grad_norm": 1.1484375,
1283
+ "learning_rate": 3.296460176991151e-05,
1284
+ "loss": 0.1673,
1285
+ "step": 1640
1286
+ },
1287
+ {
1288
+ "epoch": 1.7857142857142856,
1289
+ "grad_norm": 1.1640625,
1290
+ "learning_rate": 3.285398230088495e-05,
1291
+ "loss": 0.1868,
1292
+ "step": 1650
1293
+ },
1294
+ {
1295
+ "epoch": 1.7965367965367967,
1296
+ "grad_norm": 1.3046875,
1297
+ "learning_rate": 3.274336283185841e-05,
1298
+ "loss": 0.1648,
1299
+ "step": 1660
1300
+ },
1301
+ {
1302
+ "epoch": 1.8073593073593073,
1303
+ "grad_norm": 1.609375,
1304
+ "learning_rate": 3.2632743362831856e-05,
1305
+ "loss": 0.1768,
1306
+ "step": 1670
1307
+ },
1308
+ {
1309
+ "epoch": 1.8181818181818183,
1310
+ "grad_norm": 1.7578125,
1311
+ "learning_rate": 3.2522123893805315e-05,
1312
+ "loss": 0.1665,
1313
+ "step": 1680
1314
+ },
1315
+ {
1316
+ "epoch": 1.829004329004329,
1317
+ "grad_norm": 1.484375,
1318
+ "learning_rate": 3.241150442477876e-05,
1319
+ "loss": 0.153,
1320
+ "step": 1690
1321
+ },
1322
+ {
1323
+ "epoch": 1.8398268398268398,
1324
+ "grad_norm": 1.5703125,
1325
+ "learning_rate": 3.230088495575221e-05,
1326
+ "loss": 0.1525,
1327
+ "step": 1700
1328
+ },
1329
+ {
1330
+ "epoch": 1.8398268398268398,
1331
+ "eval_loss": 0.1896573305130005,
1332
+ "eval_runtime": 13.7312,
1333
+ "eval_samples_per_second": 36.413,
1334
+ "eval_steps_per_second": 0.291,
1335
+ "step": 1700
1336
+ },
1337
+ {
1338
+ "epoch": 1.8506493506493507,
1339
+ "grad_norm": 2.4375,
1340
+ "learning_rate": 3.2190265486725664e-05,
1341
+ "loss": 0.1614,
1342
+ "step": 1710
1343
+ },
1344
+ {
1345
+ "epoch": 1.8614718614718615,
1346
+ "grad_norm": 1.0625,
1347
+ "learning_rate": 3.2079646017699115e-05,
1348
+ "loss": 0.186,
1349
+ "step": 1720
1350
+ },
1351
+ {
1352
+ "epoch": 1.8722943722943723,
1353
+ "grad_norm": 2.046875,
1354
+ "learning_rate": 3.196902654867257e-05,
1355
+ "loss": 0.1611,
1356
+ "step": 1730
1357
+ },
1358
+ {
1359
+ "epoch": 1.883116883116883,
1360
+ "grad_norm": 1.828125,
1361
+ "learning_rate": 3.185840707964602e-05,
1362
+ "loss": 0.1767,
1363
+ "step": 1740
1364
+ },
1365
+ {
1366
+ "epoch": 1.893939393939394,
1367
+ "grad_norm": 1.484375,
1368
+ "learning_rate": 3.174778761061947e-05,
1369
+ "loss": 0.1726,
1370
+ "step": 1750
1371
+ },
1372
+ {
1373
+ "epoch": 1.9047619047619047,
1374
+ "grad_norm": 1.640625,
1375
+ "learning_rate": 3.163716814159292e-05,
1376
+ "loss": 0.1444,
1377
+ "step": 1760
1378
+ },
1379
+ {
1380
+ "epoch": 1.9155844155844157,
1381
+ "grad_norm": 2.0625,
1382
+ "learning_rate": 3.1526548672566374e-05,
1383
+ "loss": 0.1609,
1384
+ "step": 1770
1385
+ },
1386
+ {
1387
+ "epoch": 1.9264069264069263,
1388
+ "grad_norm": 1.5078125,
1389
+ "learning_rate": 3.1415929203539826e-05,
1390
+ "loss": 0.1694,
1391
+ "step": 1780
1392
+ },
1393
+ {
1394
+ "epoch": 1.9372294372294372,
1395
+ "grad_norm": 1.3984375,
1396
+ "learning_rate": 3.130530973451328e-05,
1397
+ "loss": 0.1718,
1398
+ "step": 1790
1399
+ },
1400
+ {
1401
+ "epoch": 1.948051948051948,
1402
+ "grad_norm": 1.71875,
1403
+ "learning_rate": 3.119469026548672e-05,
1404
+ "loss": 0.1669,
1405
+ "step": 1800
1406
+ },
1407
+ {
1408
+ "epoch": 1.948051948051948,
1409
+ "eval_loss": 0.18646453320980072,
1410
+ "eval_runtime": 14.1377,
1411
+ "eval_samples_per_second": 35.366,
1412
+ "eval_steps_per_second": 0.283,
1413
+ "step": 1800
1414
+ },
1415
+ {
1416
+ "epoch": 1.9588744588744589,
1417
+ "grad_norm": 1.90625,
1418
+ "learning_rate": 3.108407079646018e-05,
1419
+ "loss": 0.1804,
1420
+ "step": 1810
1421
+ },
1422
+ {
1423
+ "epoch": 1.9696969696969697,
1424
+ "grad_norm": 1.609375,
1425
+ "learning_rate": 3.097345132743363e-05,
1426
+ "loss": 0.1594,
1427
+ "step": 1820
1428
+ },
1429
+ {
1430
+ "epoch": 1.9805194805194806,
1431
+ "grad_norm": 1.4921875,
1432
+ "learning_rate": 3.086283185840708e-05,
1433
+ "loss": 0.1542,
1434
+ "step": 1830
1435
+ },
1436
+ {
1437
+ "epoch": 1.9913419913419914,
1438
+ "grad_norm": 2.046875,
1439
+ "learning_rate": 3.075221238938053e-05,
1440
+ "loss": 0.1462,
1441
+ "step": 1840
1442
+ },
1443
+ {
1444
+ "epoch": 2.002164502164502,
1445
+ "grad_norm": 1.0703125,
1446
+ "learning_rate": 3.064159292035398e-05,
1447
+ "loss": 0.1446,
1448
+ "step": 1850
1449
+ },
1450
+ {
1451
+ "epoch": 2.012987012987013,
1452
+ "grad_norm": 1.578125,
1453
+ "learning_rate": 3.0530973451327434e-05,
1454
+ "loss": 0.148,
1455
+ "step": 1860
1456
+ },
1457
+ {
1458
+ "epoch": 2.0238095238095237,
1459
+ "grad_norm": 1.5546875,
1460
+ "learning_rate": 3.0420353982300886e-05,
1461
+ "loss": 0.1437,
1462
+ "step": 1870
1463
+ },
1464
+ {
1465
+ "epoch": 2.034632034632035,
1466
+ "grad_norm": 1.015625,
1467
+ "learning_rate": 3.030973451327434e-05,
1468
+ "loss": 0.1519,
1469
+ "step": 1880
1470
+ },
1471
+ {
1472
+ "epoch": 2.0454545454545454,
1473
+ "grad_norm": 1.203125,
1474
+ "learning_rate": 3.019911504424779e-05,
1475
+ "loss": 0.1573,
1476
+ "step": 1890
1477
+ },
1478
+ {
1479
+ "epoch": 2.0562770562770565,
1480
+ "grad_norm": 1.4296875,
1481
+ "learning_rate": 3.008849557522124e-05,
1482
+ "loss": 0.1319,
1483
+ "step": 1900
1484
+ },
1485
+ {
1486
+ "epoch": 2.0562770562770565,
1487
+ "eval_loss": 0.1948573887348175,
1488
+ "eval_runtime": 14.0173,
1489
+ "eval_samples_per_second": 35.67,
1490
+ "eval_steps_per_second": 0.285,
1491
+ "step": 1900
1492
+ },
1493
+ {
1494
+ "epoch": 2.067099567099567,
1495
+ "grad_norm": 1.3828125,
1496
+ "learning_rate": 2.997787610619469e-05,
1497
+ "loss": 0.1526,
1498
+ "step": 1910
1499
+ },
1500
+ {
1501
+ "epoch": 2.0779220779220777,
1502
+ "grad_norm": 2.03125,
1503
+ "learning_rate": 2.9867256637168145e-05,
1504
+ "loss": 0.1444,
1505
+ "step": 1920
1506
+ },
1507
+ {
1508
+ "epoch": 2.088744588744589,
1509
+ "grad_norm": 1.59375,
1510
+ "learning_rate": 2.9756637168141593e-05,
1511
+ "loss": 0.1437,
1512
+ "step": 1930
1513
+ },
1514
+ {
1515
+ "epoch": 2.0995670995670994,
1516
+ "grad_norm": 1.2890625,
1517
+ "learning_rate": 2.964601769911505e-05,
1518
+ "loss": 0.1447,
1519
+ "step": 1940
1520
+ },
1521
+ {
1522
+ "epoch": 2.1103896103896105,
1523
+ "grad_norm": 1.59375,
1524
+ "learning_rate": 2.9535398230088497e-05,
1525
+ "loss": 0.1597,
1526
+ "step": 1950
1527
+ },
1528
+ {
1529
+ "epoch": 2.121212121212121,
1530
+ "grad_norm": 1.5703125,
1531
+ "learning_rate": 2.942477876106195e-05,
1532
+ "loss": 0.1416,
1533
+ "step": 1960
1534
+ },
1535
+ {
1536
+ "epoch": 2.132034632034632,
1537
+ "grad_norm": 0.98828125,
1538
+ "learning_rate": 2.9314159292035397e-05,
1539
+ "loss": 0.1451,
1540
+ "step": 1970
1541
+ },
1542
+ {
1543
+ "epoch": 2.142857142857143,
1544
+ "grad_norm": 0.90625,
1545
+ "learning_rate": 2.9203539823008852e-05,
1546
+ "loss": 0.1385,
1547
+ "step": 1980
1548
+ },
1549
+ {
1550
+ "epoch": 2.153679653679654,
1551
+ "grad_norm": 1.515625,
1552
+ "learning_rate": 2.90929203539823e-05,
1553
+ "loss": 0.1527,
1554
+ "step": 1990
1555
+ },
1556
+ {
1557
+ "epoch": 2.1645021645021645,
1558
+ "grad_norm": 1.453125,
1559
+ "learning_rate": 2.8982300884955753e-05,
1560
+ "loss": 0.1472,
1561
+ "step": 2000
1562
+ },
1563
+ {
1564
+ "epoch": 2.1645021645021645,
1565
+ "eval_loss": 0.19946011900901794,
1566
+ "eval_runtime": 14.0627,
1567
+ "eval_samples_per_second": 35.555,
1568
+ "eval_steps_per_second": 0.284,
1569
+ "step": 2000
1570
+ },
1571
+ {
1572
+ "epoch": 2.175324675324675,
1573
+ "grad_norm": 0.87890625,
1574
+ "learning_rate": 2.88716814159292e-05,
1575
+ "loss": 0.126,
1576
+ "step": 2010
1577
+ },
1578
+ {
1579
+ "epoch": 2.186147186147186,
1580
+ "grad_norm": 1.4296875,
1581
+ "learning_rate": 2.8761061946902656e-05,
1582
+ "loss": 0.1473,
1583
+ "step": 2020
1584
+ },
1585
+ {
1586
+ "epoch": 2.196969696969697,
1587
+ "grad_norm": 1.21875,
1588
+ "learning_rate": 2.8650442477876105e-05,
1589
+ "loss": 0.1394,
1590
+ "step": 2030
1591
+ },
1592
+ {
1593
+ "epoch": 2.207792207792208,
1594
+ "grad_norm": 1.3359375,
1595
+ "learning_rate": 2.853982300884956e-05,
1596
+ "loss": 0.1382,
1597
+ "step": 2040
1598
+ },
1599
+ {
1600
+ "epoch": 2.2186147186147185,
1601
+ "grad_norm": 1.34375,
1602
+ "learning_rate": 2.8429203539823012e-05,
1603
+ "loss": 0.1289,
1604
+ "step": 2050
1605
+ },
1606
+ {
1607
+ "epoch": 2.2294372294372296,
1608
+ "grad_norm": 1.3828125,
1609
+ "learning_rate": 2.831858407079646e-05,
1610
+ "loss": 0.1518,
1611
+ "step": 2060
1612
+ },
1613
+ {
1614
+ "epoch": 2.24025974025974,
1615
+ "grad_norm": 2.265625,
1616
+ "learning_rate": 2.8207964601769915e-05,
1617
+ "loss": 0.1473,
1618
+ "step": 2070
1619
+ },
1620
+ {
1621
+ "epoch": 2.2510822510822512,
1622
+ "grad_norm": 1.3828125,
1623
+ "learning_rate": 2.8097345132743364e-05,
1624
+ "loss": 0.1191,
1625
+ "step": 2080
1626
+ },
1627
+ {
1628
+ "epoch": 2.261904761904762,
1629
+ "grad_norm": 1.203125,
1630
+ "learning_rate": 2.7986725663716816e-05,
1631
+ "loss": 0.1379,
1632
+ "step": 2090
1633
+ },
1634
+ {
1635
+ "epoch": 2.2727272727272725,
1636
+ "grad_norm": 1.7265625,
1637
+ "learning_rate": 2.7876106194690264e-05,
1638
+ "loss": 0.157,
1639
+ "step": 2100
1640
+ },
1641
+ {
1642
+ "epoch": 2.2727272727272725,
1643
+ "eval_loss": 0.19301347434520721,
1644
+ "eval_runtime": 13.8998,
1645
+ "eval_samples_per_second": 35.972,
1646
+ "eval_steps_per_second": 0.288,
1647
+ "step": 2100
1648
+ },
1649
+ {
1650
+ "epoch": 2.2835497835497836,
1651
+ "grad_norm": 1.21875,
1652
+ "learning_rate": 2.776548672566372e-05,
1653
+ "loss": 0.1494,
1654
+ "step": 2110
1655
+ },
1656
+ {
1657
+ "epoch": 2.2943722943722946,
1658
+ "grad_norm": 0.98046875,
1659
+ "learning_rate": 2.7654867256637168e-05,
1660
+ "loss": 0.1341,
1661
+ "step": 2120
1662
+ },
1663
+ {
1664
+ "epoch": 2.3051948051948052,
1665
+ "grad_norm": 1.7890625,
1666
+ "learning_rate": 2.7544247787610623e-05,
1667
+ "loss": 0.1517,
1668
+ "step": 2130
1669
+ },
1670
+ {
1671
+ "epoch": 2.316017316017316,
1672
+ "grad_norm": 1.453125,
1673
+ "learning_rate": 2.743362831858407e-05,
1674
+ "loss": 0.1403,
1675
+ "step": 2140
1676
+ },
1677
+ {
1678
+ "epoch": 2.326839826839827,
1679
+ "grad_norm": 1.3125,
1680
+ "learning_rate": 2.7323008849557523e-05,
1681
+ "loss": 0.1442,
1682
+ "step": 2150
1683
+ },
1684
+ {
1685
+ "epoch": 2.3376623376623376,
1686
+ "grad_norm": 2.03125,
1687
+ "learning_rate": 2.721238938053097e-05,
1688
+ "loss": 0.1426,
1689
+ "step": 2160
1690
+ },
1691
+ {
1692
+ "epoch": 2.3484848484848486,
1693
+ "grad_norm": 1.09375,
1694
+ "learning_rate": 2.7101769911504427e-05,
1695
+ "loss": 0.1341,
1696
+ "step": 2170
1697
+ },
1698
+ {
1699
+ "epoch": 2.3593073593073592,
1700
+ "grad_norm": 1.6796875,
1701
+ "learning_rate": 2.6991150442477875e-05,
1702
+ "loss": 0.1492,
1703
+ "step": 2180
1704
+ },
1705
+ {
1706
+ "epoch": 2.3701298701298703,
1707
+ "grad_norm": 1.8984375,
1708
+ "learning_rate": 2.688053097345133e-05,
1709
+ "loss": 0.1393,
1710
+ "step": 2190
1711
+ },
1712
+ {
1713
+ "epoch": 2.380952380952381,
1714
+ "grad_norm": 1.7890625,
1715
+ "learning_rate": 2.6769911504424782e-05,
1716
+ "loss": 0.1522,
1717
+ "step": 2200
1718
+ },
1719
+ {
1720
+ "epoch": 2.380952380952381,
1721
+ "eval_loss": 0.18994522094726562,
1722
+ "eval_runtime": 17.3426,
1723
+ "eval_samples_per_second": 28.831,
1724
+ "eval_steps_per_second": 0.231,
1725
+ "step": 2200
1726
+ },
1727
+ {
1728
+ "epoch": 2.391774891774892,
1729
+ "grad_norm": 1.3515625,
1730
+ "learning_rate": 2.665929203539823e-05,
1731
+ "loss": 0.1365,
1732
+ "step": 2210
1733
+ },
1734
+ {
1735
+ "epoch": 2.4025974025974026,
1736
+ "grad_norm": 0.96875,
1737
+ "learning_rate": 2.6548672566371686e-05,
1738
+ "loss": 0.144,
1739
+ "step": 2220
1740
+ },
1741
+ {
1742
+ "epoch": 2.4134199134199132,
1743
+ "grad_norm": 1.8828125,
1744
+ "learning_rate": 2.6438053097345134e-05,
1745
+ "loss": 0.1443,
1746
+ "step": 2230
1747
+ },
1748
+ {
1749
+ "epoch": 2.4242424242424243,
1750
+ "grad_norm": 1.3671875,
1751
+ "learning_rate": 2.6327433628318586e-05,
1752
+ "loss": 0.148,
1753
+ "step": 2240
1754
+ },
1755
+ {
1756
+ "epoch": 2.435064935064935,
1757
+ "grad_norm": 1.6484375,
1758
+ "learning_rate": 2.6216814159292035e-05,
1759
+ "loss": 0.1364,
1760
+ "step": 2250
1761
+ },
1762
+ {
1763
+ "epoch": 2.445887445887446,
1764
+ "grad_norm": 1.21875,
1765
+ "learning_rate": 2.610619469026549e-05,
1766
+ "loss": 0.1517,
1767
+ "step": 2260
1768
+ },
1769
+ {
1770
+ "epoch": 2.4567099567099566,
1771
+ "grad_norm": 1.546875,
1772
+ "learning_rate": 2.5995575221238938e-05,
1773
+ "loss": 0.152,
1774
+ "step": 2270
1775
+ },
1776
+ {
1777
+ "epoch": 2.4675324675324677,
1778
+ "grad_norm": 1.21875,
1779
+ "learning_rate": 2.5884955752212393e-05,
1780
+ "loss": 0.1321,
1781
+ "step": 2280
1782
+ },
1783
+ {
1784
+ "epoch": 2.4783549783549783,
1785
+ "grad_norm": 1.375,
1786
+ "learning_rate": 2.5774336283185842e-05,
1787
+ "loss": 0.1395,
1788
+ "step": 2290
1789
+ },
1790
+ {
1791
+ "epoch": 2.4891774891774894,
1792
+ "grad_norm": 1.671875,
1793
+ "learning_rate": 2.5663716814159294e-05,
1794
+ "loss": 0.1533,
1795
+ "step": 2300
1796
+ },
1797
+ {
1798
+ "epoch": 2.4891774891774894,
1799
+ "eval_loss": 0.19859272241592407,
1800
+ "eval_runtime": 14.0876,
1801
+ "eval_samples_per_second": 35.492,
1802
+ "eval_steps_per_second": 0.284,
1803
+ "step": 2300
1804
+ },
1805
+ {
1806
+ "epoch": 2.5,
1807
+ "grad_norm": 3.46875,
1808
+ "learning_rate": 2.5553097345132742e-05,
1809
+ "loss": 0.1539,
1810
+ "step": 2310
1811
+ },
1812
+ {
1813
+ "epoch": 2.5108225108225106,
1814
+ "grad_norm": 1.5234375,
1815
+ "learning_rate": 2.5442477876106197e-05,
1816
+ "loss": 0.1437,
1817
+ "step": 2320
1818
+ },
1819
+ {
1820
+ "epoch": 2.5216450216450217,
1821
+ "grad_norm": 1.4609375,
1822
+ "learning_rate": 2.5331858407079646e-05,
1823
+ "loss": 0.1275,
1824
+ "step": 2330
1825
+ },
1826
+ {
1827
+ "epoch": 2.5324675324675323,
1828
+ "grad_norm": 1.6640625,
1829
+ "learning_rate": 2.5221238938053098e-05,
1830
+ "loss": 0.1534,
1831
+ "step": 2340
1832
+ },
1833
+ {
1834
+ "epoch": 2.5432900432900434,
1835
+ "grad_norm": 2.453125,
1836
+ "learning_rate": 2.5110619469026546e-05,
1837
+ "loss": 0.1234,
1838
+ "step": 2350
1839
+ },
1840
+ {
1841
+ "epoch": 2.554112554112554,
1842
+ "grad_norm": 1.8984375,
1843
+ "learning_rate": 2.5e-05,
1844
+ "loss": 0.138,
1845
+ "step": 2360
1846
+ },
1847
+ {
1848
+ "epoch": 2.564935064935065,
1849
+ "grad_norm": 1.8984375,
1850
+ "learning_rate": 2.4889380530973453e-05,
1851
+ "loss": 0.1451,
1852
+ "step": 2370
1853
+ },
1854
+ {
1855
+ "epoch": 2.5757575757575757,
1856
+ "grad_norm": 1.1015625,
1857
+ "learning_rate": 2.4778761061946905e-05,
1858
+ "loss": 0.1459,
1859
+ "step": 2380
1860
+ },
1861
+ {
1862
+ "epoch": 2.5865800865800868,
1863
+ "grad_norm": 1.0625,
1864
+ "learning_rate": 2.4668141592920353e-05,
1865
+ "loss": 0.1495,
1866
+ "step": 2390
1867
+ },
1868
+ {
1869
+ "epoch": 2.5974025974025974,
1870
+ "grad_norm": 1.640625,
1871
+ "learning_rate": 2.4557522123893805e-05,
1872
+ "loss": 0.1586,
1873
+ "step": 2400
1874
+ },
1875
+ {
1876
+ "epoch": 2.5974025974025974,
1877
+ "eval_loss": 0.1879378855228424,
1878
+ "eval_runtime": 14.0304,
1879
+ "eval_samples_per_second": 35.637,
1880
+ "eval_steps_per_second": 0.285,
1881
+ "step": 2400
1882
+ },
1883
+ {
1884
+ "epoch": 2.608225108225108,
1885
+ "grad_norm": 1.5234375,
1886
+ "learning_rate": 2.4446902654867257e-05,
1887
+ "loss": 0.1375,
1888
+ "step": 2410
1889
+ },
1890
+ {
1891
+ "epoch": 2.619047619047619,
1892
+ "grad_norm": 1.3671875,
1893
+ "learning_rate": 2.433628318584071e-05,
1894
+ "loss": 0.1358,
1895
+ "step": 2420
1896
+ },
1897
+ {
1898
+ "epoch": 2.62987012987013,
1899
+ "grad_norm": 1.5546875,
1900
+ "learning_rate": 2.422566371681416e-05,
1901
+ "loss": 0.1368,
1902
+ "step": 2430
1903
+ },
1904
+ {
1905
+ "epoch": 2.6406926406926408,
1906
+ "grad_norm": 0.7890625,
1907
+ "learning_rate": 2.411504424778761e-05,
1908
+ "loss": 0.1326,
1909
+ "step": 2440
1910
+ },
1911
+ {
1912
+ "epoch": 2.6515151515151514,
1913
+ "grad_norm": 2.140625,
1914
+ "learning_rate": 2.4004424778761064e-05,
1915
+ "loss": 0.1396,
1916
+ "step": 2450
1917
+ },
1918
+ {
1919
+ "epoch": 2.6623376623376624,
1920
+ "grad_norm": 0.96484375,
1921
+ "learning_rate": 2.3893805309734516e-05,
1922
+ "loss": 0.1164,
1923
+ "step": 2460
1924
+ },
1925
+ {
1926
+ "epoch": 2.673160173160173,
1927
+ "grad_norm": 3.53125,
1928
+ "learning_rate": 2.3783185840707968e-05,
1929
+ "loss": 0.1333,
1930
+ "step": 2470
1931
+ },
1932
+ {
1933
+ "epoch": 2.683982683982684,
1934
+ "grad_norm": 1.3046875,
1935
+ "learning_rate": 2.3672566371681416e-05,
1936
+ "loss": 0.1507,
1937
+ "step": 2480
1938
+ },
1939
+ {
1940
+ "epoch": 2.6948051948051948,
1941
+ "grad_norm": 1.1015625,
1942
+ "learning_rate": 2.3561946902654868e-05,
1943
+ "loss": 0.1384,
1944
+ "step": 2490
1945
+ },
1946
+ {
1947
+ "epoch": 2.7056277056277054,
1948
+ "grad_norm": 1.53125,
1949
+ "learning_rate": 2.345132743362832e-05,
1950
+ "loss": 0.1698,
1951
+ "step": 2500
1952
+ },
1953
+ {
1954
+ "epoch": 2.7056277056277054,
1955
+ "eval_loss": 0.19169363379478455,
1956
+ "eval_runtime": 14.0148,
1957
+ "eval_samples_per_second": 35.676,
1958
+ "eval_steps_per_second": 0.285,
1959
+ "step": 2500
1960
+ },
1961
+ {
1962
+ "epoch": 2.7164502164502164,
1963
+ "grad_norm": 1.53125,
1964
+ "learning_rate": 2.334070796460177e-05,
1965
+ "loss": 0.1406,
1966
+ "step": 2510
1967
+ },
1968
+ {
1969
+ "epoch": 2.7272727272727275,
1970
+ "grad_norm": 0.94140625,
1971
+ "learning_rate": 2.3230088495575223e-05,
1972
+ "loss": 0.1476,
1973
+ "step": 2520
1974
+ },
1975
+ {
1976
+ "epoch": 2.738095238095238,
1977
+ "grad_norm": 1.3515625,
1978
+ "learning_rate": 2.3119469026548672e-05,
1979
+ "loss": 0.129,
1980
+ "step": 2530
1981
+ },
1982
+ {
1983
+ "epoch": 2.7489177489177488,
1984
+ "grad_norm": 1.4765625,
1985
+ "learning_rate": 2.3008849557522124e-05,
1986
+ "loss": 0.136,
1987
+ "step": 2540
1988
+ },
1989
+ {
1990
+ "epoch": 2.75974025974026,
1991
+ "grad_norm": 1.8125,
1992
+ "learning_rate": 2.2898230088495576e-05,
1993
+ "loss": 0.1436,
1994
+ "step": 2550
1995
+ },
1996
+ {
1997
+ "epoch": 2.7705627705627704,
1998
+ "grad_norm": 1.34375,
1999
+ "learning_rate": 2.2787610619469027e-05,
2000
+ "loss": 0.1356,
2001
+ "step": 2560
2002
+ },
2003
+ {
2004
+ "epoch": 2.7813852813852815,
2005
+ "grad_norm": 2.28125,
2006
+ "learning_rate": 2.267699115044248e-05,
2007
+ "loss": 0.1455,
2008
+ "step": 2570
2009
+ },
2010
+ {
2011
+ "epoch": 2.792207792207792,
2012
+ "grad_norm": 1.140625,
2013
+ "learning_rate": 2.2566371681415928e-05,
2014
+ "loss": 0.1369,
2015
+ "step": 2580
2016
+ },
2017
+ {
2018
+ "epoch": 2.8030303030303028,
2019
+ "grad_norm": 2.09375,
2020
+ "learning_rate": 2.245575221238938e-05,
2021
+ "loss": 0.1433,
2022
+ "step": 2590
2023
+ },
2024
+ {
2025
+ "epoch": 2.813852813852814,
2026
+ "grad_norm": 1.2421875,
2027
+ "learning_rate": 2.234513274336283e-05,
2028
+ "loss": 0.1215,
2029
+ "step": 2600
2030
+ },
2031
+ {
2032
+ "epoch": 2.813852813852814,
2033
+ "eval_loss": 0.1945955455303192,
2034
+ "eval_runtime": 14.0272,
2035
+ "eval_samples_per_second": 35.645,
2036
+ "eval_steps_per_second": 0.285,
2037
+ "step": 2600
2038
+ },
2039
+ {
2040
+ "epoch": 2.824675324675325,
2041
+ "grad_norm": 1.5,
2042
+ "learning_rate": 2.2234513274336286e-05,
2043
+ "loss": 0.1342,
2044
+ "step": 2610
2045
+ },
2046
+ {
2047
+ "epoch": 2.8354978354978355,
2048
+ "grad_norm": 1.3828125,
2049
+ "learning_rate": 2.2123893805309738e-05,
2050
+ "loss": 0.134,
2051
+ "step": 2620
2052
+ },
2053
+ {
2054
+ "epoch": 2.846320346320346,
2055
+ "grad_norm": 1.4140625,
2056
+ "learning_rate": 2.2013274336283187e-05,
2057
+ "loss": 0.1443,
2058
+ "step": 2630
2059
+ },
2060
+ {
2061
+ "epoch": 2.857142857142857,
2062
+ "grad_norm": 0.94140625,
2063
+ "learning_rate": 2.190265486725664e-05,
2064
+ "loss": 0.1287,
2065
+ "step": 2640
2066
+ },
2067
+ {
2068
+ "epoch": 2.867965367965368,
2069
+ "grad_norm": 1.2578125,
2070
+ "learning_rate": 2.179203539823009e-05,
2071
+ "loss": 0.1423,
2072
+ "step": 2650
2073
+ },
2074
+ {
2075
+ "epoch": 2.878787878787879,
2076
+ "grad_norm": 1.8203125,
2077
+ "learning_rate": 2.1681415929203542e-05,
2078
+ "loss": 0.1428,
2079
+ "step": 2660
2080
+ },
2081
+ {
2082
+ "epoch": 2.8896103896103895,
2083
+ "grad_norm": 1.578125,
2084
+ "learning_rate": 2.1570796460176994e-05,
2085
+ "loss": 0.1324,
2086
+ "step": 2670
2087
+ },
2088
+ {
2089
+ "epoch": 2.9004329004329006,
2090
+ "grad_norm": 1.234375,
2091
+ "learning_rate": 2.1460176991150442e-05,
2092
+ "loss": 0.1405,
2093
+ "step": 2680
2094
+ },
2095
+ {
2096
+ "epoch": 2.911255411255411,
2097
+ "grad_norm": 1.3203125,
2098
+ "learning_rate": 2.1349557522123894e-05,
2099
+ "loss": 0.1545,
2100
+ "step": 2690
2101
+ },
2102
+ {
2103
+ "epoch": 2.9220779220779223,
2104
+ "grad_norm": 1.703125,
2105
+ "learning_rate": 2.1238938053097346e-05,
2106
+ "loss": 0.1354,
2107
+ "step": 2700
2108
+ },
2109
+ {
2110
+ "epoch": 2.9220779220779223,
2111
+ "eval_loss": 0.1839994490146637,
2112
+ "eval_runtime": 13.8953,
2113
+ "eval_samples_per_second": 35.983,
2114
+ "eval_steps_per_second": 0.288,
2115
+ "step": 2700
2116
+ },
2117
+ {
2118
+ "epoch": 2.932900432900433,
2119
+ "grad_norm": 1.5859375,
2120
+ "learning_rate": 2.1128318584070798e-05,
2121
+ "loss": 0.1467,
2122
+ "step": 2710
2123
+ },
2124
+ {
2125
+ "epoch": 2.9437229437229435,
2126
+ "grad_norm": 1.59375,
2127
+ "learning_rate": 2.101769911504425e-05,
2128
+ "loss": 0.1548,
2129
+ "step": 2720
2130
+ },
2131
+ {
2132
+ "epoch": 2.9545454545454546,
2133
+ "grad_norm": 1.3984375,
2134
+ "learning_rate": 2.0907079646017698e-05,
2135
+ "loss": 0.1498,
2136
+ "step": 2730
2137
+ },
2138
+ {
2139
+ "epoch": 2.965367965367965,
2140
+ "grad_norm": 1.5859375,
2141
+ "learning_rate": 2.079646017699115e-05,
2142
+ "loss": 0.1484,
2143
+ "step": 2740
2144
+ },
2145
+ {
2146
+ "epoch": 2.9761904761904763,
2147
+ "grad_norm": 1.5546875,
2148
+ "learning_rate": 2.0685840707964602e-05,
2149
+ "loss": 0.1409,
2150
+ "step": 2750
2151
+ },
2152
+ {
2153
+ "epoch": 2.987012987012987,
2154
+ "grad_norm": 1.7421875,
2155
+ "learning_rate": 2.0575221238938054e-05,
2156
+ "loss": 0.1482,
2157
+ "step": 2760
2158
+ },
2159
+ {
2160
+ "epoch": 2.997835497835498,
2161
+ "grad_norm": 2.140625,
2162
+ "learning_rate": 2.0464601769911505e-05,
2163
+ "loss": 0.1456,
2164
+ "step": 2770
2165
+ },
2166
+ {
2167
+ "epoch": 3.0086580086580086,
2168
+ "grad_norm": 1.4765625,
2169
+ "learning_rate": 2.0353982300884957e-05,
2170
+ "loss": 0.111,
2171
+ "step": 2780
2172
+ },
2173
+ {
2174
+ "epoch": 3.0194805194805197,
2175
+ "grad_norm": 1.625,
2176
+ "learning_rate": 2.024336283185841e-05,
2177
+ "loss": 0.1259,
2178
+ "step": 2790
2179
+ },
2180
+ {
2181
+ "epoch": 3.0303030303030303,
2182
+ "grad_norm": 1.359375,
2183
+ "learning_rate": 2.013274336283186e-05,
2184
+ "loss": 0.1234,
2185
+ "step": 2800
2186
+ },
2187
+ {
2188
+ "epoch": 3.0303030303030303,
2189
+ "eval_loss": 0.18937236070632935,
2190
+ "eval_runtime": 13.8239,
2191
+ "eval_samples_per_second": 36.169,
2192
+ "eval_steps_per_second": 0.289,
2193
+ "step": 2800
2194
+ },
2195
+ {
2196
+ "epoch": 3.0411255411255413,
2197
+ "grad_norm": 1.2734375,
2198
+ "learning_rate": 2.0022123893805313e-05,
2199
+ "loss": 0.1339,
2200
+ "step": 2810
2201
+ },
2202
+ {
2203
+ "epoch": 3.051948051948052,
2204
+ "grad_norm": 1.609375,
2205
+ "learning_rate": 1.991150442477876e-05,
2206
+ "loss": 0.1276,
2207
+ "step": 2820
2208
+ },
2209
+ {
2210
+ "epoch": 3.0627705627705626,
2211
+ "grad_norm": 1.875,
2212
+ "learning_rate": 1.9800884955752213e-05,
2213
+ "loss": 0.1226,
2214
+ "step": 2830
2215
+ },
2216
+ {
2217
+ "epoch": 3.0735930735930737,
2218
+ "grad_norm": 1.078125,
2219
+ "learning_rate": 1.9690265486725665e-05,
2220
+ "loss": 0.1273,
2221
+ "step": 2840
2222
+ },
2223
+ {
2224
+ "epoch": 3.0844155844155843,
2225
+ "grad_norm": 1.4765625,
2226
+ "learning_rate": 1.9579646017699117e-05,
2227
+ "loss": 0.1352,
2228
+ "step": 2850
2229
+ },
2230
+ {
2231
+ "epoch": 3.0952380952380953,
2232
+ "grad_norm": 1.3203125,
2233
+ "learning_rate": 1.946902654867257e-05,
2234
+ "loss": 0.1124,
2235
+ "step": 2860
2236
+ },
2237
+ {
2238
+ "epoch": 3.106060606060606,
2239
+ "grad_norm": 1.125,
2240
+ "learning_rate": 1.9358407079646017e-05,
2241
+ "loss": 0.1174,
2242
+ "step": 2870
2243
+ },
2244
+ {
2245
+ "epoch": 3.116883116883117,
2246
+ "grad_norm": 1.59375,
2247
+ "learning_rate": 1.924778761061947e-05,
2248
+ "loss": 0.1409,
2249
+ "step": 2880
2250
+ },
2251
+ {
2252
+ "epoch": 3.1277056277056277,
2253
+ "grad_norm": 1.34375,
2254
+ "learning_rate": 1.913716814159292e-05,
2255
+ "loss": 0.1205,
2256
+ "step": 2890
2257
+ },
2258
+ {
2259
+ "epoch": 3.1385281385281387,
2260
+ "grad_norm": 1.2109375,
2261
+ "learning_rate": 1.9026548672566372e-05,
2262
+ "loss": 0.1217,
2263
+ "step": 2900
2264
+ },
2265
+ {
2266
+ "epoch": 3.1385281385281387,
2267
+ "eval_loss": 0.18834540247917175,
2268
+ "eval_runtime": 13.8797,
2269
+ "eval_samples_per_second": 36.024,
2270
+ "eval_steps_per_second": 0.288,
2271
+ "step": 2900
2272
+ },
2273
+ {
2274
+ "epoch": 3.1493506493506493,
2275
+ "grad_norm": 1.7734375,
2276
+ "learning_rate": 1.8915929203539824e-05,
2277
+ "loss": 0.1373,
2278
+ "step": 2910
2279
+ },
2280
+ {
2281
+ "epoch": 3.16017316017316,
2282
+ "grad_norm": 1.203125,
2283
+ "learning_rate": 1.8805309734513272e-05,
2284
+ "loss": 0.1187,
2285
+ "step": 2920
2286
+ },
2287
+ {
2288
+ "epoch": 3.170995670995671,
2289
+ "grad_norm": 1.421875,
2290
+ "learning_rate": 1.8694690265486728e-05,
2291
+ "loss": 0.1196,
2292
+ "step": 2930
2293
+ },
2294
+ {
2295
+ "epoch": 3.1818181818181817,
2296
+ "grad_norm": 1.671875,
2297
+ "learning_rate": 1.858407079646018e-05,
2298
+ "loss": 0.1343,
2299
+ "step": 2940
2300
+ },
2301
+ {
2302
+ "epoch": 3.1926406926406927,
2303
+ "grad_norm": 1.453125,
2304
+ "learning_rate": 1.847345132743363e-05,
2305
+ "loss": 0.1267,
2306
+ "step": 2950
2307
+ },
2308
+ {
2309
+ "epoch": 3.2034632034632033,
2310
+ "grad_norm": 1.8984375,
2311
+ "learning_rate": 1.836283185840708e-05,
2312
+ "loss": 0.1261,
2313
+ "step": 2960
2314
+ },
2315
+ {
2316
+ "epoch": 3.2142857142857144,
2317
+ "grad_norm": 1.484375,
2318
+ "learning_rate": 1.825221238938053e-05,
2319
+ "loss": 0.1202,
2320
+ "step": 2970
2321
+ },
2322
+ {
2323
+ "epoch": 3.225108225108225,
2324
+ "grad_norm": 1.1640625,
2325
+ "learning_rate": 1.8141592920353983e-05,
2326
+ "loss": 0.1259,
2327
+ "step": 2980
2328
+ },
2329
+ {
2330
+ "epoch": 3.235930735930736,
2331
+ "grad_norm": 1.515625,
2332
+ "learning_rate": 1.8030973451327435e-05,
2333
+ "loss": 0.1208,
2334
+ "step": 2990
2335
+ },
2336
+ {
2337
+ "epoch": 3.2467532467532467,
2338
+ "grad_norm": 1.1875,
2339
+ "learning_rate": 1.7920353982300887e-05,
2340
+ "loss": 0.1363,
2341
+ "step": 3000
2342
+ },
2343
+ {
2344
+ "epoch": 3.2467532467532467,
2345
+ "eval_loss": 0.19392167031764984,
2346
+ "eval_runtime": 14.0702,
2347
+ "eval_samples_per_second": 35.536,
2348
+ "eval_steps_per_second": 0.284,
2349
+ "step": 3000
2350
+ },
2351
+ {
2352
+ "epoch": 3.257575757575758,
2353
+ "grad_norm": 1.2265625,
2354
+ "learning_rate": 1.7809734513274335e-05,
2355
+ "loss": 0.1169,
2356
+ "step": 3010
2357
+ },
2358
+ {
2359
+ "epoch": 3.2683982683982684,
2360
+ "grad_norm": 1.765625,
2361
+ "learning_rate": 1.7699115044247787e-05,
2362
+ "loss": 0.1279,
2363
+ "step": 3020
2364
+ },
2365
+ {
2366
+ "epoch": 3.279220779220779,
2367
+ "grad_norm": 1.4140625,
2368
+ "learning_rate": 1.758849557522124e-05,
2369
+ "loss": 0.127,
2370
+ "step": 3030
2371
+ },
2372
+ {
2373
+ "epoch": 3.29004329004329,
2374
+ "grad_norm": 1.53125,
2375
+ "learning_rate": 1.747787610619469e-05,
2376
+ "loss": 0.1294,
2377
+ "step": 3040
2378
+ },
2379
+ {
2380
+ "epoch": 3.3008658008658007,
2381
+ "grad_norm": 1.9765625,
2382
+ "learning_rate": 1.7367256637168143e-05,
2383
+ "loss": 0.1244,
2384
+ "step": 3050
2385
+ },
2386
+ {
2387
+ "epoch": 3.311688311688312,
2388
+ "grad_norm": 0.7109375,
2389
+ "learning_rate": 1.7256637168141594e-05,
2390
+ "loss": 0.1194,
2391
+ "step": 3060
2392
+ },
2393
+ {
2394
+ "epoch": 3.3225108225108224,
2395
+ "grad_norm": 1.296875,
2396
+ "learning_rate": 1.7146017699115043e-05,
2397
+ "loss": 0.1284,
2398
+ "step": 3070
2399
+ },
2400
+ {
2401
+ "epoch": 3.3333333333333335,
2402
+ "grad_norm": 0.93359375,
2403
+ "learning_rate": 1.7035398230088495e-05,
2404
+ "loss": 0.1359,
2405
+ "step": 3080
2406
+ },
2407
+ {
2408
+ "epoch": 3.344155844155844,
2409
+ "grad_norm": 1.203125,
2410
+ "learning_rate": 1.692477876106195e-05,
2411
+ "loss": 0.1318,
2412
+ "step": 3090
2413
+ },
2414
+ {
2415
+ "epoch": 3.354978354978355,
2416
+ "grad_norm": 1.4453125,
2417
+ "learning_rate": 1.6814159292035402e-05,
2418
+ "loss": 0.1277,
2419
+ "step": 3100
2420
+ },
2421
+ {
2422
+ "epoch": 3.354978354978355,
2423
+ "eval_loss": 0.18987847864627838,
2424
+ "eval_runtime": 13.8933,
2425
+ "eval_samples_per_second": 35.989,
2426
+ "eval_steps_per_second": 0.288,
2427
+ "step": 3100
2428
+ },
2429
+ {
2430
+ "epoch": 3.365800865800866,
2431
+ "grad_norm": 1.25,
2432
+ "learning_rate": 1.670353982300885e-05,
2433
+ "loss": 0.1164,
2434
+ "step": 3110
2435
+ },
2436
+ {
2437
+ "epoch": 3.3766233766233764,
2438
+ "grad_norm": 1.3125,
2439
+ "learning_rate": 1.6592920353982302e-05,
2440
+ "loss": 0.125,
2441
+ "step": 3120
2442
+ },
2443
+ {
2444
+ "epoch": 3.3874458874458875,
2445
+ "grad_norm": 1.1796875,
2446
+ "learning_rate": 1.6482300884955754e-05,
2447
+ "loss": 0.1261,
2448
+ "step": 3130
2449
+ },
2450
+ {
2451
+ "epoch": 3.398268398268398,
2452
+ "grad_norm": 1.25,
2453
+ "learning_rate": 1.6371681415929206e-05,
2454
+ "loss": 0.126,
2455
+ "step": 3140
2456
+ },
2457
+ {
2458
+ "epoch": 3.409090909090909,
2459
+ "grad_norm": 1.59375,
2460
+ "learning_rate": 1.6261061946902657e-05,
2461
+ "loss": 0.1268,
2462
+ "step": 3150
2463
+ },
2464
+ {
2465
+ "epoch": 3.41991341991342,
2466
+ "grad_norm": 1.5390625,
2467
+ "learning_rate": 1.6150442477876106e-05,
2468
+ "loss": 0.1239,
2469
+ "step": 3160
2470
+ },
2471
+ {
2472
+ "epoch": 3.430735930735931,
2473
+ "grad_norm": 1.859375,
2474
+ "learning_rate": 1.6039823008849558e-05,
2475
+ "loss": 0.11,
2476
+ "step": 3170
2477
+ },
2478
+ {
2479
+ "epoch": 3.4415584415584415,
2480
+ "grad_norm": 0.82421875,
2481
+ "learning_rate": 1.592920353982301e-05,
2482
+ "loss": 0.1243,
2483
+ "step": 3180
2484
+ },
2485
+ {
2486
+ "epoch": 3.4523809523809526,
2487
+ "grad_norm": 1.4140625,
2488
+ "learning_rate": 1.581858407079646e-05,
2489
+ "loss": 0.1257,
2490
+ "step": 3190
2491
+ },
2492
+ {
2493
+ "epoch": 3.463203463203463,
2494
+ "grad_norm": 1.0078125,
2495
+ "learning_rate": 1.5707964601769913e-05,
2496
+ "loss": 0.1231,
2497
+ "step": 3200
2498
+ },
2499
+ {
2500
+ "epoch": 3.463203463203463,
2501
+ "eval_loss": 0.19035659730434418,
2502
+ "eval_runtime": 14.1973,
2503
+ "eval_samples_per_second": 35.218,
2504
+ "eval_steps_per_second": 0.282,
2505
+ "step": 3200
2506
+ },
2507
+ {
2508
+ "epoch": 3.474025974025974,
2509
+ "grad_norm": 1.4921875,
2510
+ "learning_rate": 1.559734513274336e-05,
2511
+ "loss": 0.1161,
2512
+ "step": 3210
2513
+ },
2514
+ {
2515
+ "epoch": 3.484848484848485,
2516
+ "grad_norm": 1.6484375,
2517
+ "learning_rate": 1.5486725663716813e-05,
2518
+ "loss": 0.1275,
2519
+ "step": 3220
2520
+ },
2521
+ {
2522
+ "epoch": 3.4956709956709955,
2523
+ "grad_norm": 1.171875,
2524
+ "learning_rate": 1.5376106194690265e-05,
2525
+ "loss": 0.1177,
2526
+ "step": 3230
2527
+ },
2528
+ {
2529
+ "epoch": 3.5064935064935066,
2530
+ "grad_norm": 1.25,
2531
+ "learning_rate": 1.5265486725663717e-05,
2532
+ "loss": 0.1269,
2533
+ "step": 3240
2534
+ },
2535
+ {
2536
+ "epoch": 3.517316017316017,
2537
+ "grad_norm": 1.4375,
2538
+ "learning_rate": 1.515486725663717e-05,
2539
+ "loss": 0.1295,
2540
+ "step": 3250
2541
+ },
2542
+ {
2543
+ "epoch": 3.5281385281385282,
2544
+ "grad_norm": 1.34375,
2545
+ "learning_rate": 1.504424778761062e-05,
2546
+ "loss": 0.1183,
2547
+ "step": 3260
2548
+ },
2549
+ {
2550
+ "epoch": 3.538961038961039,
2551
+ "grad_norm": 1.4921875,
2552
+ "learning_rate": 1.4933628318584072e-05,
2553
+ "loss": 0.1146,
2554
+ "step": 3270
2555
+ },
2556
+ {
2557
+ "epoch": 3.54978354978355,
2558
+ "grad_norm": 1.1796875,
2559
+ "learning_rate": 1.4823008849557524e-05,
2560
+ "loss": 0.1149,
2561
+ "step": 3280
2562
+ },
2563
+ {
2564
+ "epoch": 3.5606060606060606,
2565
+ "grad_norm": 1.2421875,
2566
+ "learning_rate": 1.4712389380530974e-05,
2567
+ "loss": 0.1276,
2568
+ "step": 3290
2569
+ },
2570
+ {
2571
+ "epoch": 3.571428571428571,
2572
+ "grad_norm": 0.86328125,
2573
+ "learning_rate": 1.4601769911504426e-05,
2574
+ "loss": 0.0958,
2575
+ "step": 3300
2576
+ },
2577
+ {
2578
+ "epoch": 3.571428571428571,
2579
+ "eval_loss": 0.18995501101016998,
2580
+ "eval_runtime": 18.7862,
2581
+ "eval_samples_per_second": 26.615,
2582
+ "eval_steps_per_second": 0.213,
2583
+ "step": 3300
2584
+ },
2585
+ {
2586
+ "epoch": 3.5822510822510822,
2587
+ "grad_norm": 1.4609375,
2588
+ "learning_rate": 1.4491150442477876e-05,
2589
+ "loss": 0.1257,
2590
+ "step": 3310
2591
+ },
2592
+ {
2593
+ "epoch": 3.5930735930735933,
2594
+ "grad_norm": 1.75,
2595
+ "learning_rate": 1.4380530973451328e-05,
2596
+ "loss": 0.1379,
2597
+ "step": 3320
2598
+ },
2599
+ {
2600
+ "epoch": 3.603896103896104,
2601
+ "grad_norm": 1.359375,
2602
+ "learning_rate": 1.426991150442478e-05,
2603
+ "loss": 0.1359,
2604
+ "step": 3330
2605
+ },
2606
+ {
2607
+ "epoch": 3.6147186147186146,
2608
+ "grad_norm": 0.97265625,
2609
+ "learning_rate": 1.415929203539823e-05,
2610
+ "loss": 0.1071,
2611
+ "step": 3340
2612
+ },
2613
+ {
2614
+ "epoch": 3.6255411255411256,
2615
+ "grad_norm": 2.625,
2616
+ "learning_rate": 1.4048672566371682e-05,
2617
+ "loss": 0.1231,
2618
+ "step": 3350
2619
+ },
2620
+ {
2621
+ "epoch": 3.6363636363636362,
2622
+ "grad_norm": 1.59375,
2623
+ "learning_rate": 1.3938053097345132e-05,
2624
+ "loss": 0.1234,
2625
+ "step": 3360
2626
+ },
2627
+ {
2628
+ "epoch": 3.6471861471861473,
2629
+ "grad_norm": 1.140625,
2630
+ "learning_rate": 1.3827433628318584e-05,
2631
+ "loss": 0.1287,
2632
+ "step": 3370
2633
+ },
2634
+ {
2635
+ "epoch": 3.658008658008658,
2636
+ "grad_norm": 1.3984375,
2637
+ "learning_rate": 1.3716814159292036e-05,
2638
+ "loss": 0.1345,
2639
+ "step": 3380
2640
+ },
2641
+ {
2642
+ "epoch": 3.6688311688311686,
2643
+ "grad_norm": 1.640625,
2644
+ "learning_rate": 1.3606194690265486e-05,
2645
+ "loss": 0.146,
2646
+ "step": 3390
2647
+ },
2648
+ {
2649
+ "epoch": 3.6796536796536796,
2650
+ "grad_norm": 1.640625,
2651
+ "learning_rate": 1.3495575221238938e-05,
2652
+ "loss": 0.1321,
2653
+ "step": 3400
2654
+ },
2655
+ {
2656
+ "epoch": 3.6796536796536796,
2657
+ "eval_loss": 0.19218742847442627,
2658
+ "eval_runtime": 13.835,
2659
+ "eval_samples_per_second": 36.14,
2660
+ "eval_steps_per_second": 0.289,
2661
+ "step": 3400
2662
+ },
2663
+ {
2664
+ "epoch": 3.6904761904761907,
2665
+ "grad_norm": 1.3984375,
2666
+ "learning_rate": 1.3384955752212391e-05,
2667
+ "loss": 0.1269,
2668
+ "step": 3410
2669
+ },
2670
+ {
2671
+ "epoch": 3.7012987012987013,
2672
+ "grad_norm": 1.9296875,
2673
+ "learning_rate": 1.3274336283185843e-05,
2674
+ "loss": 0.1216,
2675
+ "step": 3420
2676
+ },
2677
+ {
2678
+ "epoch": 3.712121212121212,
2679
+ "grad_norm": 1.0234375,
2680
+ "learning_rate": 1.3163716814159293e-05,
2681
+ "loss": 0.1096,
2682
+ "step": 3430
2683
+ },
2684
+ {
2685
+ "epoch": 3.722943722943723,
2686
+ "grad_norm": 1.3125,
2687
+ "learning_rate": 1.3053097345132745e-05,
2688
+ "loss": 0.1422,
2689
+ "step": 3440
2690
+ },
2691
+ {
2692
+ "epoch": 3.7337662337662336,
2693
+ "grad_norm": 1.046875,
2694
+ "learning_rate": 1.2942477876106197e-05,
2695
+ "loss": 0.1316,
2696
+ "step": 3450
2697
+ },
2698
+ {
2699
+ "epoch": 3.7445887445887447,
2700
+ "grad_norm": 1.6015625,
2701
+ "learning_rate": 1.2831858407079647e-05,
2702
+ "loss": 0.1198,
2703
+ "step": 3460
2704
+ },
2705
+ {
2706
+ "epoch": 3.7554112554112553,
2707
+ "grad_norm": 1.5078125,
2708
+ "learning_rate": 1.2721238938053099e-05,
2709
+ "loss": 0.1251,
2710
+ "step": 3470
2711
+ },
2712
+ {
2713
+ "epoch": 3.7662337662337664,
2714
+ "grad_norm": 1.1015625,
2715
+ "learning_rate": 1.2610619469026549e-05,
2716
+ "loss": 0.1198,
2717
+ "step": 3480
2718
+ },
2719
+ {
2720
+ "epoch": 3.777056277056277,
2721
+ "grad_norm": 0.93359375,
2722
+ "learning_rate": 1.25e-05,
2723
+ "loss": 0.1195,
2724
+ "step": 3490
2725
+ },
2726
+ {
2727
+ "epoch": 3.787878787878788,
2728
+ "grad_norm": 1.7265625,
2729
+ "learning_rate": 1.2389380530973452e-05,
2730
+ "loss": 0.1052,
2731
+ "step": 3500
2732
+ },
2733
+ {
2734
+ "epoch": 3.787878787878788,
2735
+ "eval_loss": 0.19286279380321503,
2736
+ "eval_runtime": 14.0626,
2737
+ "eval_samples_per_second": 35.555,
2738
+ "eval_steps_per_second": 0.284,
2739
+ "step": 3500
2740
+ },
2741
+ {
2742
+ "epoch": 3.7987012987012987,
2743
+ "grad_norm": 1.3046875,
2744
+ "learning_rate": 1.2278761061946903e-05,
2745
+ "loss": 0.1246,
2746
+ "step": 3510
2747
+ },
2748
+ {
2749
+ "epoch": 3.8095238095238093,
2750
+ "grad_norm": 1.0859375,
2751
+ "learning_rate": 1.2168141592920354e-05,
2752
+ "loss": 0.1159,
2753
+ "step": 3520
2754
+ },
2755
+ {
2756
+ "epoch": 3.8203463203463204,
2757
+ "grad_norm": 1.3828125,
2758
+ "learning_rate": 1.2057522123893804e-05,
2759
+ "loss": 0.129,
2760
+ "step": 3530
2761
+ },
2762
+ {
2763
+ "epoch": 3.8311688311688314,
2764
+ "grad_norm": 2.40625,
2765
+ "learning_rate": 1.1946902654867258e-05,
2766
+ "loss": 0.1177,
2767
+ "step": 3540
2768
+ },
2769
+ {
2770
+ "epoch": 3.841991341991342,
2771
+ "grad_norm": 1.453125,
2772
+ "learning_rate": 1.1836283185840708e-05,
2773
+ "loss": 0.1333,
2774
+ "step": 3550
2775
+ },
2776
+ {
2777
+ "epoch": 3.8528138528138527,
2778
+ "grad_norm": 1.296875,
2779
+ "learning_rate": 1.172566371681416e-05,
2780
+ "loss": 0.1287,
2781
+ "step": 3560
2782
+ },
2783
+ {
2784
+ "epoch": 3.8636363636363638,
2785
+ "grad_norm": 2.234375,
2786
+ "learning_rate": 1.1615044247787612e-05,
2787
+ "loss": 0.1284,
2788
+ "step": 3570
2789
+ },
2790
+ {
2791
+ "epoch": 3.8744588744588744,
2792
+ "grad_norm": 1.4921875,
2793
+ "learning_rate": 1.1504424778761062e-05,
2794
+ "loss": 0.1257,
2795
+ "step": 3580
2796
+ },
2797
+ {
2798
+ "epoch": 3.8852813852813854,
2799
+ "grad_norm": 1.5625,
2800
+ "learning_rate": 1.1393805309734514e-05,
2801
+ "loss": 0.1157,
2802
+ "step": 3590
2803
+ },
2804
+ {
2805
+ "epoch": 3.896103896103896,
2806
+ "grad_norm": 1.421875,
2807
+ "learning_rate": 1.1283185840707964e-05,
2808
+ "loss": 0.1106,
2809
+ "step": 3600
2810
+ },
2811
+ {
2812
+ "epoch": 3.896103896103896,
2813
+ "eval_loss": 0.19091036915779114,
2814
+ "eval_runtime": 14.4935,
2815
+ "eval_samples_per_second": 34.498,
2816
+ "eval_steps_per_second": 0.276,
2817
+ "step": 3600
2818
+ },
2819
+ {
2820
+ "epoch": 3.9069264069264067,
2821
+ "grad_norm": 1.5859375,
2822
+ "learning_rate": 1.1172566371681416e-05,
2823
+ "loss": 0.123,
2824
+ "step": 3610
2825
+ },
2826
+ {
2827
+ "epoch": 3.9177489177489178,
2828
+ "grad_norm": 1.8046875,
2829
+ "learning_rate": 1.1061946902654869e-05,
2830
+ "loss": 0.1299,
2831
+ "step": 3620
2832
+ },
2833
+ {
2834
+ "epoch": 3.928571428571429,
2835
+ "grad_norm": 1.5234375,
2836
+ "learning_rate": 1.095132743362832e-05,
2837
+ "loss": 0.1227,
2838
+ "step": 3630
2839
+ },
2840
+ {
2841
+ "epoch": 3.9393939393939394,
2842
+ "grad_norm": 1.421875,
2843
+ "learning_rate": 1.0840707964601771e-05,
2844
+ "loss": 0.1239,
2845
+ "step": 3640
2846
+ },
2847
+ {
2848
+ "epoch": 3.95021645021645,
2849
+ "grad_norm": 0.92578125,
2850
+ "learning_rate": 1.0730088495575221e-05,
2851
+ "loss": 0.1251,
2852
+ "step": 3650
2853
+ },
2854
+ {
2855
+ "epoch": 3.961038961038961,
2856
+ "grad_norm": 0.80078125,
2857
+ "learning_rate": 1.0619469026548673e-05,
2858
+ "loss": 0.1121,
2859
+ "step": 3660
2860
+ },
2861
+ {
2862
+ "epoch": 3.9718614718614718,
2863
+ "grad_norm": 1.4609375,
2864
+ "learning_rate": 1.0508849557522125e-05,
2865
+ "loss": 0.1298,
2866
+ "step": 3670
2867
+ },
2868
+ {
2869
+ "epoch": 3.982683982683983,
2870
+ "grad_norm": 1.1953125,
2871
+ "learning_rate": 1.0398230088495575e-05,
2872
+ "loss": 0.1129,
2873
+ "step": 3680
2874
+ },
2875
+ {
2876
+ "epoch": 3.9935064935064934,
2877
+ "grad_norm": 1.9765625,
2878
+ "learning_rate": 1.0287610619469027e-05,
2879
+ "loss": 0.1214,
2880
+ "step": 3690
2881
+ },
2882
+ {
2883
+ "epoch": 4.004329004329004,
2884
+ "grad_norm": 1.15625,
2885
+ "learning_rate": 1.0176991150442479e-05,
2886
+ "loss": 0.1207,
2887
+ "step": 3700
2888
+ },
2889
+ {
2890
+ "epoch": 4.004329004329004,
2891
+ "eval_loss": 0.19235384464263916,
2892
+ "eval_runtime": 14.3348,
2893
+ "eval_samples_per_second": 34.88,
2894
+ "eval_steps_per_second": 0.279,
2895
+ "step": 3700
2896
+ },
2897
+ {
2898
+ "epoch": 4.015151515151516,
2899
+ "grad_norm": 1.6015625,
2900
+ "learning_rate": 1.006637168141593e-05,
2901
+ "loss": 0.1167,
2902
+ "step": 3710
2903
+ },
2904
+ {
2905
+ "epoch": 4.025974025974026,
2906
+ "grad_norm": 1.1875,
2907
+ "learning_rate": 9.95575221238938e-06,
2908
+ "loss": 0.1195,
2909
+ "step": 3720
2910
+ },
2911
+ {
2912
+ "epoch": 4.036796536796537,
2913
+ "grad_norm": 2.03125,
2914
+ "learning_rate": 9.845132743362832e-06,
2915
+ "loss": 0.1171,
2916
+ "step": 3730
2917
+ },
2918
+ {
2919
+ "epoch": 4.0476190476190474,
2920
+ "grad_norm": 1.3828125,
2921
+ "learning_rate": 9.734513274336284e-06,
2922
+ "loss": 0.1176,
2923
+ "step": 3740
2924
+ },
2925
+ {
2926
+ "epoch": 4.058441558441558,
2927
+ "grad_norm": 1.1953125,
2928
+ "learning_rate": 9.623893805309734e-06,
2929
+ "loss": 0.12,
2930
+ "step": 3750
2931
+ },
2932
+ {
2933
+ "epoch": 4.06926406926407,
2934
+ "grad_norm": 1.3515625,
2935
+ "learning_rate": 9.513274336283186e-06,
2936
+ "loss": 0.1174,
2937
+ "step": 3760
2938
+ },
2939
+ {
2940
+ "epoch": 4.08008658008658,
2941
+ "grad_norm": 1.0859375,
2942
+ "learning_rate": 9.402654867256636e-06,
2943
+ "loss": 0.1096,
2944
+ "step": 3770
2945
+ },
2946
+ {
2947
+ "epoch": 4.090909090909091,
2948
+ "grad_norm": 1.546875,
2949
+ "learning_rate": 9.29203539823009e-06,
2950
+ "loss": 0.1109,
2951
+ "step": 3780
2952
+ },
2953
+ {
2954
+ "epoch": 4.1017316017316015,
2955
+ "grad_norm": 1.4453125,
2956
+ "learning_rate": 9.18141592920354e-06,
2957
+ "loss": 0.1263,
2958
+ "step": 3790
2959
+ },
2960
+ {
2961
+ "epoch": 4.112554112554113,
2962
+ "grad_norm": 1.8046875,
2963
+ "learning_rate": 9.070796460176992e-06,
2964
+ "loss": 0.1067,
2965
+ "step": 3800
2966
+ },
2967
+ {
2968
+ "epoch": 4.112554112554113,
2969
+ "eval_loss": 0.19082804024219513,
2970
+ "eval_runtime": 14.0192,
2971
+ "eval_samples_per_second": 35.665,
2972
+ "eval_steps_per_second": 0.285,
2973
+ "step": 3800
2974
+ },
2975
+ {
2976
+ "epoch": 4.123376623376624,
2977
+ "grad_norm": 1.4375,
2978
+ "learning_rate": 8.960176991150443e-06,
2979
+ "loss": 0.1079,
2980
+ "step": 3810
2981
+ },
2982
+ {
2983
+ "epoch": 4.134199134199134,
2984
+ "grad_norm": 1.796875,
2985
+ "learning_rate": 8.849557522123894e-06,
2986
+ "loss": 0.1139,
2987
+ "step": 3820
2988
+ },
2989
+ {
2990
+ "epoch": 4.145021645021645,
2991
+ "grad_norm": 1.1875,
2992
+ "learning_rate": 8.738938053097345e-06,
2993
+ "loss": 0.1107,
2994
+ "step": 3830
2995
+ },
2996
+ {
2997
+ "epoch": 4.1558441558441555,
2998
+ "grad_norm": 1.0859375,
2999
+ "learning_rate": 8.628318584070797e-06,
3000
+ "loss": 0.1227,
3001
+ "step": 3840
3002
+ },
3003
+ {
3004
+ "epoch": 4.166666666666667,
3005
+ "grad_norm": 1.6875,
3006
+ "learning_rate": 8.517699115044247e-06,
3007
+ "loss": 0.1258,
3008
+ "step": 3850
3009
+ },
3010
+ {
3011
+ "epoch": 4.177489177489178,
3012
+ "grad_norm": 1.734375,
3013
+ "learning_rate": 8.407079646017701e-06,
3014
+ "loss": 0.1186,
3015
+ "step": 3860
3016
+ },
3017
+ {
3018
+ "epoch": 4.188311688311688,
3019
+ "grad_norm": 1.40625,
3020
+ "learning_rate": 8.296460176991151e-06,
3021
+ "loss": 0.1181,
3022
+ "step": 3870
3023
+ },
3024
+ {
3025
+ "epoch": 4.199134199134199,
3026
+ "grad_norm": 2.171875,
3027
+ "learning_rate": 8.185840707964603e-06,
3028
+ "loss": 0.1302,
3029
+ "step": 3880
3030
+ },
3031
+ {
3032
+ "epoch": 4.20995670995671,
3033
+ "grad_norm": 1.2578125,
3034
+ "learning_rate": 8.075221238938053e-06,
3035
+ "loss": 0.1215,
3036
+ "step": 3890
3037
+ },
3038
+ {
3039
+ "epoch": 4.220779220779221,
3040
+ "grad_norm": 1.890625,
3041
+ "learning_rate": 7.964601769911505e-06,
3042
+ "loss": 0.1092,
3043
+ "step": 3900
3044
+ },
3045
+ {
3046
+ "epoch": 4.220779220779221,
3047
+ "eval_loss": 0.19130083918571472,
3048
+ "eval_runtime": 13.9613,
3049
+ "eval_samples_per_second": 35.813,
3050
+ "eval_steps_per_second": 0.287,
3051
+ "step": 3900
3052
+ },
3053
+ {
3054
+ "epoch": 4.231601731601732,
3055
+ "grad_norm": 1.4296875,
3056
+ "learning_rate": 7.853982300884957e-06,
3057
+ "loss": 0.1256,
3058
+ "step": 3910
3059
+ },
3060
+ {
3061
+ "epoch": 4.242424242424242,
3062
+ "grad_norm": 2.09375,
3063
+ "learning_rate": 7.743362831858407e-06,
3064
+ "loss": 0.1098,
3065
+ "step": 3920
3066
+ },
3067
+ {
3068
+ "epoch": 4.253246753246753,
3069
+ "grad_norm": 1.4921875,
3070
+ "learning_rate": 7.632743362831859e-06,
3071
+ "loss": 0.1216,
3072
+ "step": 3930
3073
+ },
3074
+ {
3075
+ "epoch": 4.264069264069264,
3076
+ "grad_norm": 0.74609375,
3077
+ "learning_rate": 7.52212389380531e-06,
3078
+ "loss": 0.1131,
3079
+ "step": 3940
3080
+ },
3081
+ {
3082
+ "epoch": 4.274891774891775,
3083
+ "grad_norm": 1.640625,
3084
+ "learning_rate": 7.411504424778762e-06,
3085
+ "loss": 0.1289,
3086
+ "step": 3950
3087
+ },
3088
+ {
3089
+ "epoch": 4.285714285714286,
3090
+ "grad_norm": 2.375,
3091
+ "learning_rate": 7.300884955752213e-06,
3092
+ "loss": 0.117,
3093
+ "step": 3960
3094
+ },
3095
+ {
3096
+ "epoch": 4.296536796536796,
3097
+ "grad_norm": 0.81640625,
3098
+ "learning_rate": 7.190265486725664e-06,
3099
+ "loss": 0.1107,
3100
+ "step": 3970
3101
+ },
3102
+ {
3103
+ "epoch": 4.307359307359308,
3104
+ "grad_norm": 1.2734375,
3105
+ "learning_rate": 7.079646017699115e-06,
3106
+ "loss": 0.1169,
3107
+ "step": 3980
3108
+ },
3109
+ {
3110
+ "epoch": 4.318181818181818,
3111
+ "grad_norm": 1.296875,
3112
+ "learning_rate": 6.969026548672566e-06,
3113
+ "loss": 0.1262,
3114
+ "step": 3990
3115
+ },
3116
+ {
3117
+ "epoch": 4.329004329004329,
3118
+ "grad_norm": 1.0546875,
3119
+ "learning_rate": 6.858407079646018e-06,
3120
+ "loss": 0.0927,
3121
+ "step": 4000
3122
+ },
3123
+ {
3124
+ "epoch": 4.329004329004329,
3125
+ "eval_loss": 0.19239334762096405,
3126
+ "eval_runtime": 14.0391,
3127
+ "eval_samples_per_second": 35.615,
3128
+ "eval_steps_per_second": 0.285,
3129
+ "step": 4000
3130
+ },
3131
+ {
3132
+ "epoch": 4.33982683982684,
3133
+ "grad_norm": 1.4375,
3134
+ "learning_rate": 6.747787610619469e-06,
3135
+ "loss": 0.1088,
3136
+ "step": 4010
3137
+ },
3138
+ {
3139
+ "epoch": 4.35064935064935,
3140
+ "grad_norm": 1.2265625,
3141
+ "learning_rate": 6.6371681415929215e-06,
3142
+ "loss": 0.1177,
3143
+ "step": 4020
3144
+ },
3145
+ {
3146
+ "epoch": 4.361471861471862,
3147
+ "grad_norm": 1.359375,
3148
+ "learning_rate": 6.5265486725663725e-06,
3149
+ "loss": 0.1173,
3150
+ "step": 4030
3151
+ },
3152
+ {
3153
+ "epoch": 4.372294372294372,
3154
+ "grad_norm": 1.4140625,
3155
+ "learning_rate": 6.415929203539823e-06,
3156
+ "loss": 0.1168,
3157
+ "step": 4040
3158
+ },
3159
+ {
3160
+ "epoch": 4.383116883116883,
3161
+ "grad_norm": 0.83203125,
3162
+ "learning_rate": 6.305309734513274e-06,
3163
+ "loss": 0.1145,
3164
+ "step": 4050
3165
+ },
3166
+ {
3167
+ "epoch": 4.393939393939394,
3168
+ "grad_norm": 1.2421875,
3169
+ "learning_rate": 6.194690265486726e-06,
3170
+ "loss": 0.1025,
3171
+ "step": 4060
3172
+ },
3173
+ {
3174
+ "epoch": 4.404761904761905,
3175
+ "grad_norm": 1.1640625,
3176
+ "learning_rate": 6.084070796460177e-06,
3177
+ "loss": 0.1016,
3178
+ "step": 4070
3179
+ },
3180
+ {
3181
+ "epoch": 4.415584415584416,
3182
+ "grad_norm": 1.2890625,
3183
+ "learning_rate": 5.973451327433629e-06,
3184
+ "loss": 0.1161,
3185
+ "step": 4080
3186
+ },
3187
+ {
3188
+ "epoch": 4.426406926406926,
3189
+ "grad_norm": 1.1484375,
3190
+ "learning_rate": 5.86283185840708e-06,
3191
+ "loss": 0.1159,
3192
+ "step": 4090
3193
+ },
3194
+ {
3195
+ "epoch": 4.437229437229437,
3196
+ "grad_norm": 0.98046875,
3197
+ "learning_rate": 5.752212389380531e-06,
3198
+ "loss": 0.1038,
3199
+ "step": 4100
3200
+ },
3201
+ {
3202
+ "epoch": 4.437229437229437,
3203
+ "eval_loss": 0.19425031542778015,
3204
+ "eval_runtime": 14.1296,
3205
+ "eval_samples_per_second": 35.387,
3206
+ "eval_steps_per_second": 0.283,
3207
+ "step": 4100
3208
+ },
3209
+ {
3210
+ "epoch": 4.448051948051948,
3211
+ "grad_norm": 2.078125,
3212
+ "learning_rate": 5.641592920353982e-06,
3213
+ "loss": 0.1281,
3214
+ "step": 4110
3215
+ },
3216
+ {
3217
+ "epoch": 4.458874458874459,
3218
+ "grad_norm": 1.4453125,
3219
+ "learning_rate": 5.5309734513274346e-06,
3220
+ "loss": 0.112,
3221
+ "step": 4120
3222
+ },
3223
+ {
3224
+ "epoch": 4.46969696969697,
3225
+ "grad_norm": 1.2578125,
3226
+ "learning_rate": 5.4203539823008855e-06,
3227
+ "loss": 0.1206,
3228
+ "step": 4130
3229
+ },
3230
+ {
3231
+ "epoch": 4.48051948051948,
3232
+ "grad_norm": 1.21875,
3233
+ "learning_rate": 5.3097345132743365e-06,
3234
+ "loss": 0.1147,
3235
+ "step": 4140
3236
+ },
3237
+ {
3238
+ "epoch": 4.491341991341991,
3239
+ "grad_norm": 1.3984375,
3240
+ "learning_rate": 5.1991150442477875e-06,
3241
+ "loss": 0.1248,
3242
+ "step": 4150
3243
+ },
3244
+ {
3245
+ "epoch": 4.5021645021645025,
3246
+ "grad_norm": 1.6171875,
3247
+ "learning_rate": 5.088495575221239e-06,
3248
+ "loss": 0.1165,
3249
+ "step": 4160
3250
+ },
3251
+ {
3252
+ "epoch": 4.512987012987013,
3253
+ "grad_norm": 3.703125,
3254
+ "learning_rate": 4.97787610619469e-06,
3255
+ "loss": 0.1067,
3256
+ "step": 4170
3257
+ },
3258
+ {
3259
+ "epoch": 4.523809523809524,
3260
+ "grad_norm": 1.9921875,
3261
+ "learning_rate": 4.867256637168142e-06,
3262
+ "loss": 0.1109,
3263
+ "step": 4180
3264
+ },
3265
+ {
3266
+ "epoch": 4.534632034632034,
3267
+ "grad_norm": 0.89453125,
3268
+ "learning_rate": 4.756637168141593e-06,
3269
+ "loss": 0.1026,
3270
+ "step": 4190
3271
+ },
3272
+ {
3273
+ "epoch": 4.545454545454545,
3274
+ "grad_norm": 1.90625,
3275
+ "learning_rate": 4.646017699115045e-06,
3276
+ "loss": 0.1231,
3277
+ "step": 4200
3278
+ },
3279
+ {
3280
+ "epoch": 4.545454545454545,
3281
+ "eval_loss": 0.1931437849998474,
3282
+ "eval_runtime": 14.0341,
3283
+ "eval_samples_per_second": 35.627,
3284
+ "eval_steps_per_second": 0.285,
3285
+ "step": 4200
3286
+ },
3287
+ {
3288
+ "epoch": 4.5562770562770565,
3289
+ "grad_norm": 0.98046875,
3290
+ "learning_rate": 4.535398230088496e-06,
3291
+ "loss": 0.1016,
3292
+ "step": 4210
3293
+ },
3294
+ {
3295
+ "epoch": 4.567099567099567,
3296
+ "grad_norm": 1.9375,
3297
+ "learning_rate": 4.424778761061947e-06,
3298
+ "loss": 0.115,
3299
+ "step": 4220
3300
+ },
3301
+ {
3302
+ "epoch": 4.577922077922078,
3303
+ "grad_norm": 1.0546875,
3304
+ "learning_rate": 4.314159292035399e-06,
3305
+ "loss": 0.122,
3306
+ "step": 4230
3307
+ },
3308
+ {
3309
+ "epoch": 4.588744588744589,
3310
+ "grad_norm": 1.578125,
3311
+ "learning_rate": 4.2035398230088504e-06,
3312
+ "loss": 0.1178,
3313
+ "step": 4240
3314
+ },
3315
+ {
3316
+ "epoch": 4.5995670995671,
3317
+ "grad_norm": 1.125,
3318
+ "learning_rate": 4.092920353982301e-06,
3319
+ "loss": 0.1179,
3320
+ "step": 4250
3321
+ },
3322
+ {
3323
+ "epoch": 4.6103896103896105,
3324
+ "grad_norm": 1.0859375,
3325
+ "learning_rate": 3.982300884955752e-06,
3326
+ "loss": 0.1109,
3327
+ "step": 4260
3328
+ },
3329
+ {
3330
+ "epoch": 4.621212121212121,
3331
+ "grad_norm": 1.0,
3332
+ "learning_rate": 3.871681415929203e-06,
3333
+ "loss": 0.1036,
3334
+ "step": 4270
3335
+ },
3336
+ {
3337
+ "epoch": 4.632034632034632,
3338
+ "grad_norm": 1.7734375,
3339
+ "learning_rate": 3.761061946902655e-06,
3340
+ "loss": 0.1102,
3341
+ "step": 4280
3342
+ },
3343
+ {
3344
+ "epoch": 4.642857142857143,
3345
+ "grad_norm": 1.4375,
3346
+ "learning_rate": 3.6504424778761066e-06,
3347
+ "loss": 0.1154,
3348
+ "step": 4290
3349
+ },
3350
+ {
3351
+ "epoch": 4.653679653679654,
3352
+ "grad_norm": 1.6328125,
3353
+ "learning_rate": 3.5398230088495575e-06,
3354
+ "loss": 0.1277,
3355
+ "step": 4300
3356
+ },
3357
+ {
3358
+ "epoch": 4.653679653679654,
3359
+ "eval_loss": 0.19161994755268097,
3360
+ "eval_runtime": 14.064,
3361
+ "eval_samples_per_second": 35.552,
3362
+ "eval_steps_per_second": 0.284,
3363
+ "step": 4300
3364
+ },
3365
+ {
3366
+ "epoch": 4.6645021645021645,
3367
+ "grad_norm": 1.578125,
3368
+ "learning_rate": 3.429203539823009e-06,
3369
+ "loss": 0.1143,
3370
+ "step": 4310
3371
+ },
3372
+ {
3373
+ "epoch": 4.675324675324675,
3374
+ "grad_norm": 4.0625,
3375
+ "learning_rate": 3.3185840707964607e-06,
3376
+ "loss": 0.1256,
3377
+ "step": 4320
3378
+ },
3379
+ {
3380
+ "epoch": 4.686147186147187,
3381
+ "grad_norm": 1.5625,
3382
+ "learning_rate": 3.2079646017699117e-06,
3383
+ "loss": 0.1275,
3384
+ "step": 4330
3385
+ },
3386
+ {
3387
+ "epoch": 4.696969696969697,
3388
+ "grad_norm": 1.4453125,
3389
+ "learning_rate": 3.097345132743363e-06,
3390
+ "loss": 0.1211,
3391
+ "step": 4340
3392
+ },
3393
+ {
3394
+ "epoch": 4.707792207792208,
3395
+ "grad_norm": 1.4296875,
3396
+ "learning_rate": 2.9867256637168145e-06,
3397
+ "loss": 0.1093,
3398
+ "step": 4350
3399
+ },
3400
+ {
3401
+ "epoch": 4.7186147186147185,
3402
+ "grad_norm": 1.7109375,
3403
+ "learning_rate": 2.8761061946902655e-06,
3404
+ "loss": 0.114,
3405
+ "step": 4360
3406
+ },
3407
+ {
3408
+ "epoch": 4.729437229437229,
3409
+ "grad_norm": 1.4453125,
3410
+ "learning_rate": 2.7654867256637173e-06,
3411
+ "loss": 0.1224,
3412
+ "step": 4370
3413
+ },
3414
+ {
3415
+ "epoch": 4.740259740259741,
3416
+ "grad_norm": 1.609375,
3417
+ "learning_rate": 2.6548672566371683e-06,
3418
+ "loss": 0.1259,
3419
+ "step": 4380
3420
+ },
3421
+ {
3422
+ "epoch": 4.751082251082251,
3423
+ "grad_norm": 1.3984375,
3424
+ "learning_rate": 2.5442477876106196e-06,
3425
+ "loss": 0.1126,
3426
+ "step": 4390
3427
+ },
3428
+ {
3429
+ "epoch": 4.761904761904762,
3430
+ "grad_norm": 1.265625,
3431
+ "learning_rate": 2.433628318584071e-06,
3432
+ "loss": 0.1023,
3433
+ "step": 4400
3434
+ },
3435
+ {
3436
+ "epoch": 4.761904761904762,
3437
+ "eval_loss": 0.1923452615737915,
3438
+ "eval_runtime": 19.6377,
3439
+ "eval_samples_per_second": 25.461,
3440
+ "eval_steps_per_second": 0.204,
3441
+ "step": 4400
3442
+ },
3443
+ {
3444
+ "epoch": 4.7727272727272725,
3445
+ "grad_norm": 1.8671875,
3446
+ "learning_rate": 2.3230088495575224e-06,
3447
+ "loss": 0.1191,
3448
+ "step": 4410
3449
+ },
3450
+ {
3451
+ "epoch": 4.783549783549784,
3452
+ "grad_norm": 1.5859375,
3453
+ "learning_rate": 2.2123893805309734e-06,
3454
+ "loss": 0.113,
3455
+ "step": 4420
3456
+ },
3457
+ {
3458
+ "epoch": 4.794372294372295,
3459
+ "grad_norm": 0.8828125,
3460
+ "learning_rate": 2.1017699115044252e-06,
3461
+ "loss": 0.1147,
3462
+ "step": 4430
3463
+ },
3464
+ {
3465
+ "epoch": 4.805194805194805,
3466
+ "grad_norm": 1.5703125,
3467
+ "learning_rate": 1.991150442477876e-06,
3468
+ "loss": 0.1159,
3469
+ "step": 4440
3470
+ },
3471
+ {
3472
+ "epoch": 4.816017316017316,
3473
+ "grad_norm": 1.203125,
3474
+ "learning_rate": 1.8805309734513276e-06,
3475
+ "loss": 0.1171,
3476
+ "step": 4450
3477
+ },
3478
+ {
3479
+ "epoch": 4.8268398268398265,
3480
+ "grad_norm": 1.3984375,
3481
+ "learning_rate": 1.7699115044247788e-06,
3482
+ "loss": 0.1183,
3483
+ "step": 4460
3484
+ },
3485
+ {
3486
+ "epoch": 4.837662337662338,
3487
+ "grad_norm": 1.359375,
3488
+ "learning_rate": 1.6592920353982304e-06,
3489
+ "loss": 0.1101,
3490
+ "step": 4470
3491
+ },
3492
+ {
3493
+ "epoch": 4.848484848484849,
3494
+ "grad_norm": 0.87890625,
3495
+ "learning_rate": 1.5486725663716816e-06,
3496
+ "loss": 0.1138,
3497
+ "step": 4480
3498
+ },
3499
+ {
3500
+ "epoch": 4.859307359307359,
3501
+ "grad_norm": 1.390625,
3502
+ "learning_rate": 1.4380530973451327e-06,
3503
+ "loss": 0.1277,
3504
+ "step": 4490
3505
+ },
3506
+ {
3507
+ "epoch": 4.87012987012987,
3508
+ "grad_norm": 1.5703125,
3509
+ "learning_rate": 1.3274336283185841e-06,
3510
+ "loss": 0.113,
3511
+ "step": 4500
3512
+ },
3513
+ {
3514
+ "epoch": 4.87012987012987,
3515
+ "eval_loss": 0.1925743669271469,
3516
+ "eval_runtime": 14.1551,
3517
+ "eval_samples_per_second": 35.323,
3518
+ "eval_steps_per_second": 0.283,
3519
+ "step": 4500
3520
+ },
3521
+ {
3522
+ "epoch": 4.880952380952381,
3523
+ "grad_norm": 1.6328125,
3524
+ "learning_rate": 1.2168141592920355e-06,
3525
+ "loss": 0.1012,
3526
+ "step": 4510
3527
+ },
3528
+ {
3529
+ "epoch": 4.891774891774892,
3530
+ "grad_norm": 1.0625,
3531
+ "learning_rate": 1.1061946902654867e-06,
3532
+ "loss": 0.109,
3533
+ "step": 4520
3534
+ },
3535
+ {
3536
+ "epoch": 4.902597402597403,
3537
+ "grad_norm": 0.86328125,
3538
+ "learning_rate": 9.95575221238938e-07,
3539
+ "loss": 0.1191,
3540
+ "step": 4530
3541
+ },
3542
+ {
3543
+ "epoch": 4.913419913419913,
3544
+ "grad_norm": 1.390625,
3545
+ "learning_rate": 8.849557522123894e-07,
3546
+ "loss": 0.1144,
3547
+ "step": 4540
3548
+ },
3549
+ {
3550
+ "epoch": 4.924242424242424,
3551
+ "grad_norm": 0.94140625,
3552
+ "learning_rate": 7.743362831858408e-07,
3553
+ "loss": 0.1167,
3554
+ "step": 4550
3555
+ },
3556
+ {
3557
+ "epoch": 4.935064935064935,
3558
+ "grad_norm": 1.3046875,
3559
+ "learning_rate": 6.637168141592921e-07,
3560
+ "loss": 0.1131,
3561
+ "step": 4560
3562
+ },
3563
+ {
3564
+ "epoch": 4.945887445887446,
3565
+ "grad_norm": 1.0234375,
3566
+ "learning_rate": 5.530973451327434e-07,
3567
+ "loss": 0.1185,
3568
+ "step": 4570
3569
+ },
3570
+ {
3571
+ "epoch": 4.956709956709957,
3572
+ "grad_norm": 1.140625,
3573
+ "learning_rate": 4.424778761061947e-07,
3574
+ "loss": 0.1302,
3575
+ "step": 4580
3576
+ },
3577
+ {
3578
+ "epoch": 4.967532467532467,
3579
+ "grad_norm": 1.875,
3580
+ "learning_rate": 3.3185840707964603e-07,
3581
+ "loss": 0.1145,
3582
+ "step": 4590
3583
+ },
3584
+ {
3585
+ "epoch": 4.978354978354979,
3586
+ "grad_norm": 1.3125,
3587
+ "learning_rate": 2.2123893805309735e-07,
3588
+ "loss": 0.1136,
3589
+ "step": 4600
3590
+ },
3591
+ {
3592
+ "epoch": 4.978354978354979,
3593
+ "eval_loss": 0.19142386317253113,
3594
+ "eval_runtime": 13.9692,
3595
+ "eval_samples_per_second": 35.793,
3596
+ "eval_steps_per_second": 0.286,
3597
+ "step": 4600
3598
+ },
3599
+ {
3600
+ "epoch": 4.989177489177489,
3601
+ "grad_norm": 1.1015625,
3602
+ "learning_rate": 1.1061946902654867e-07,
3603
+ "loss": 0.1204,
3604
+ "step": 4610
3605
+ },
3606
+ {
3607
+ "epoch": 5.0,
3608
+ "grad_norm": 1.625,
3609
+ "learning_rate": 0.0,
3610
+ "loss": 0.11,
3611
+ "step": 4620
3612
+ }
3613
+ ],
3614
+ "logging_steps": 10,
3615
+ "max_steps": 4620,
3616
+ "num_input_tokens_seen": 0,
3617
+ "num_train_epochs": 5,
3618
+ "save_steps": 500,
3619
+ "stateful_callbacks": {
3620
+ "TrainerControl": {
3621
+ "args": {
3622
+ "should_epoch_stop": false,
3623
+ "should_evaluate": false,
3624
+ "should_log": false,
3625
+ "should_save": true,
3626
+ "should_training_stop": true
3627
+ },
3628
+ "attributes": {}
3629
+ }
3630
+ },
3631
+ "total_flos": 9.062004698834473e+18,
3632
+ "train_batch_size": 128,
3633
+ "trial_name": null,
3634
+ "trial_params": null
3635
+ }
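The learning-rate values logged above are consistent with the Hugging Face Trainer's default linear schedule with warmup, given the `learning_rate: 5e-5`, `warmup_steps: 100` and 4620 total steps recorded in `training_config.yml` further down. A minimal sketch of that schedule, assuming the default `lr_scheduler_type: "linear"`:

```python
def linear_lr_with_warmup(step: int, peak_lr: float = 5e-5,
                          warmup_steps: int = 100, max_steps: int = 4620) -> float:
    """Linear warmup to peak_lr, then linear decay to 0 at max_steps (sketch)."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (max_steps - step) / (max_steps - warmup_steps)

# Matches the trainer state logged above, e.g.
print(linear_lr_with_warmup(4430))  # ~2.10e-06
print(linear_lr_with_warmup(4610))  # ~1.11e-07
```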
checkpoint-4620/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b8021a02e01ea2353e15a4ebae50ce1d0ee1f68f5be152393922bdf6c1a0c9a
+ size 5240
git_hash.txt ADDED
@@ -0,0 +1 @@
+ ece493abc30f10357cdce4b16cb2e6a5dc8cf735
preprocessor_config.json ADDED
@@ -0,0 +1,25 @@
+ {
+ "do_convert_rgb": null,
+ "do_normalize": true,
+ "do_rescale": true,
+ "do_resize": true,
+ "image_mean": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "image_processor_type": "SiglipImageProcessor",
+ "image_seq_length": 1024,
+ "image_std": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "processor_class": "ColPaliProcessor",
+ "resample": 3,
+ "rescale_factor": 0.00392156862745098,
+ "size": {
+ "height": 448,
+ "width": 448
+ }
+ }
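In plain terms, this config asks for: bicubic resize (`resample: 3`) to 448×448, rescaling by 1/255, and per-channel normalization with mean and std of 0.5. A minimal NumPy/PIL sketch of what those settings imply (illustrative only; in practice `ColPaliProcessor` delegates to `SiglipImageProcessor`):

```python
from PIL import Image
import numpy as np

def preprocess(path: str) -> np.ndarray:
    """Replicates the settings in preprocessor_config.json (sketch, not the actual processor)."""
    img = Image.open(path).convert("RGB")
    img = img.resize((448, 448), resample=Image.BICUBIC)     # "resample": 3, "size": 448x448
    x = np.asarray(img).astype(np.float32) * (1.0 / 255.0)   # "rescale_factor": 0.00392156862745098
    x = (x - 0.5) / 0.5                                       # "image_mean" / "image_std" = 0.5
    return x.transpose(2, 0, 1)                               # HWC -> CHW, shape (3, 448, 448)
```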
results.json ADDED
@@ -0,0 +1 @@
+ {"validation_set": {"ndcg_at_1": 0.79, "ndcg_at_3": 0.84843, "ndcg_at_5": 0.85669, "ndcg_at_10": 0.86202, "ndcg_at_20": 0.8677, "ndcg_at_50": 0.87297, "ndcg_at_100": 0.87529, "map_at_1": 0.79, "map_at_3": 0.83467, "map_at_5": 0.83927, "map_at_10": 0.84156, "map_at_20": 0.84317, "map_at_50": 0.84407, "map_at_100": 0.84429, "recall_at_1": 0.79, "recall_at_3": 0.888, "recall_at_5": 0.908, "recall_at_10": 0.924, "recall_at_20": 0.946, "recall_at_50": 0.972, "recall_at_100": 0.986, "precision_at_1": 0.79, "precision_at_3": 0.296, "precision_at_5": 0.1816, "precision_at_10": 0.0924, "precision_at_20": 0.0473, "precision_at_50": 0.01944, "precision_at_100": 0.00986, "mrr_at_1": 0.794, "mrr_at_3": 0.8353333333333334, "mrr_at_5": 0.8422333333333334, "mrr_at_10": 0.8441301587301587, "mrr_at_20": 0.8458793785419947, "mrr_at_50": 0.8465229562956768, "mrr_at_100": 0.8467717729113465, "naucs_at_1_max": 0.33967826484237773, "naucs_at_1_std": -0.582754226080265, "naucs_at_1_diff1": 0.9353692259600352, "naucs_at_3_max": 0.4177489177489186, "naucs_at_3_std": -0.6576873647186127, "naucs_at_3_diff1": 0.9209111201298699, "naucs_at_5_max": 0.3537632444282065, "naucs_at_5_std": -0.8584622254698979, "naucs_at_5_diff1": 0.9243494499248974, "naucs_at_10_max": 0.32978279030910307, "naucs_at_10_std": -0.8843063541206009, "naucs_at_10_diff1": 0.9574303405572758, "naucs_at_20_max": 0.24876370301207096, "naucs_at_20_std": -1.1399522772071786, "naucs_at_20_diff1": 0.9806342290002422, "naucs_at_50_max": 0.337434973989592, "naucs_at_50_std": -1.3481726023742884, "naucs_at_50_diff1": 0.9813258636788064, "naucs_at_100_max": 0.3099239695878289, "naucs_at_100_std": -1.2645058023209355, "naucs_at_100_diff1": 1.0}, "syntheticDocQA_energy": {"ndcg_at_1": 0.87, "ndcg_at_3": 0.93047, "ndcg_at_5": 0.93478, "ndcg_at_10": 0.93834, "ndcg_at_20": 0.93834, "ndcg_at_50": 0.94059, "ndcg_at_100": 0.94059, "map_at_1": 0.87, "map_at_3": 0.91667, "map_at_5": 0.91917, "map_at_10": 0.92083, "map_at_20": 0.92083, "map_at_50": 0.92131, "map_at_100": 0.92131, "recall_at_1": 0.87, "recall_at_3": 0.97, "recall_at_5": 0.98, "recall_at_10": 0.99, "recall_at_20": 0.99, "recall_at_50": 1.0, "recall_at_100": 1.0, "precision_at_1": 0.87, "precision_at_3": 0.32333, "precision_at_5": 0.196, "precision_at_10": 0.099, "precision_at_20": 0.0495, "precision_at_50": 0.02, "precision_at_100": 0.01, "mrr_at_1": 0.89, "mrr_at_3": 0.93, "mrr_at_5": 0.932, "mrr_at_10": 0.932, "mrr_at_20": 0.9325, "mrr_at_50": 0.9325, "mrr_at_100": 0.9325, "naucs_at_1_max": 0.1931052358735098, "naucs_at_1_std": -0.27316151966229746, "naucs_at_1_diff1": 0.975005554321262, "naucs_at_3_max": 0.7424525365701778, "naucs_at_3_std": -0.45238095238094883, "naucs_at_3_diff1": 1.0, "naucs_at_5_max": 0.9346405228758136, "naucs_at_5_std": 0.1914098972922579, "naucs_at_5_diff1": 1.0, "naucs_at_10_max": 1.0, "naucs_at_10_std": 0.5541549953314738, "naucs_at_10_diff1": 1.0, "naucs_at_20_max": 1.0, "naucs_at_20_std": 0.5541549953314738, "naucs_at_20_diff1": 1.0, "naucs_at_50_max": NaN, "naucs_at_50_std": NaN, "naucs_at_50_diff1": NaN, "naucs_at_100_max": NaN, "naucs_at_100_std": NaN, "naucs_at_100_diff1": NaN}, "syntheticDocQA_healthcare_industry": {"ndcg_at_1": 0.92, "ndcg_at_3": 0.95655, "ndcg_at_5": 0.96516, "ndcg_at_10": 0.96516, "ndcg_at_20": 0.96516, "ndcg_at_50": 0.96516, "ndcg_at_100": 0.96516, "map_at_1": 0.92, "map_at_3": 0.94833, "map_at_5": 0.95333, "map_at_10": 0.95333, "map_at_20": 0.95333, "map_at_50": 0.95333, "map_at_100": 0.95333, "recall_at_1": 0.92, "recall_at_3": 0.98, 
"recall_at_5": 1.0, "recall_at_10": 1.0, "recall_at_20": 1.0, "recall_at_50": 1.0, "recall_at_100": 1.0, "precision_at_1": 0.92, "precision_at_3": 0.32667, "precision_at_5": 0.2, "precision_at_10": 0.1, "precision_at_20": 0.05, "precision_at_50": 0.02, "precision_at_100": 0.01, "mrr_at_1": 0.92, "mrr_at_3": 0.9483333333333333, "mrr_at_5": 0.9533333333333333, "mrr_at_10": 0.9533333333333333, "mrr_at_20": 0.9533333333333333, "mrr_at_50": 0.9533333333333333, "mrr_at_100": 0.9533333333333333, "naucs_at_1_max": 0.5192577030812326, "naucs_at_1_std": -0.6149626517273588, "naucs_at_1_diff1": 0.8768674136321195, "naucs_at_3_max": -0.07586367880486825, "naucs_at_3_std": -0.6909430438842241, "naucs_at_3_diff1": 0.7770774976657261, "naucs_at_5_max": 1.0, "naucs_at_5_std": 1.0, "naucs_at_5_diff1": 1.0, "naucs_at_10_max": 1.0, "naucs_at_10_std": 1.0, "naucs_at_10_diff1": 1.0, "naucs_at_20_max": 1.0, "naucs_at_20_std": 1.0, "naucs_at_20_diff1": 1.0, "naucs_at_50_max": NaN, "naucs_at_50_std": NaN, "naucs_at_50_diff1": NaN, "naucs_at_100_max": NaN, "naucs_at_100_std": NaN, "naucs_at_100_diff1": NaN}, "syntheticDocQA_artificial_intelligence_test": {"ndcg_at_1": 0.9, "ndcg_at_3": 0.95917, "ndcg_at_5": 0.95917, "ndcg_at_10": 0.95917, "ndcg_at_20": 0.95917, "ndcg_at_50": 0.95917, "ndcg_at_100": 0.95917, "map_at_1": 0.9, "map_at_3": 0.945, "map_at_5": 0.945, "map_at_10": 0.945, "map_at_20": 0.945, "map_at_50": 0.945, "map_at_100": 0.945, "recall_at_1": 0.9, "recall_at_3": 1.0, "recall_at_5": 1.0, "recall_at_10": 1.0, "recall_at_20": 1.0, "recall_at_50": 1.0, "recall_at_100": 1.0, "precision_at_1": 0.9, "precision_at_3": 0.33333, "precision_at_5": 0.2, "precision_at_10": 0.1, "precision_at_20": 0.05, "precision_at_50": 0.02, "precision_at_100": 0.01, "mrr_at_1": 0.9, "mrr_at_3": 0.9483333333333333, "mrr_at_5": 0.9483333333333333, "mrr_at_10": 0.9483333333333333, "mrr_at_20": 0.9483333333333333, "mrr_at_50": 0.9483333333333333, "mrr_at_100": 0.9483333333333333, "naucs_at_1_max": 0.3045751633986925, "naucs_at_1_std": -0.11125116713351993, "naucs_at_1_diff1": 0.9444444444444449, "naucs_at_3_max": 1.0, "naucs_at_3_std": 1.0, "naucs_at_3_diff1": 1.0, "naucs_at_5_max": 1.0, "naucs_at_5_std": 1.0, "naucs_at_5_diff1": 1.0, "naucs_at_10_max": 1.0, "naucs_at_10_std": 1.0, "naucs_at_10_diff1": 1.0, "naucs_at_20_max": 1.0, "naucs_at_20_std": 1.0, "naucs_at_20_diff1": 1.0, "naucs_at_50_max": NaN, "naucs_at_50_std": NaN, "naucs_at_50_diff1": NaN, "naucs_at_100_max": NaN, "naucs_at_100_std": NaN, "naucs_at_100_diff1": NaN}, "syntheticDocQA_government_reports": {"ndcg_at_1": 0.81, "ndcg_at_3": 0.90571, "ndcg_at_5": 0.91389, "ndcg_at_10": 0.91389, "ndcg_at_20": 0.91668, "ndcg_at_50": 0.91668, "ndcg_at_100": 0.91668, "map_at_1": 0.81, "map_at_3": 0.88333, "map_at_5": 0.88783, "map_at_10": 0.88783, "map_at_20": 0.88874, "map_at_50": 0.88874, "map_at_100": 0.88874, "recall_at_1": 0.81, "recall_at_3": 0.97, "recall_at_5": 0.99, "recall_at_10": 0.99, "recall_at_20": 1.0, "recall_at_50": 1.0, "recall_at_100": 1.0, "precision_at_1": 0.81, "precision_at_3": 0.32333, "precision_at_5": 0.198, "precision_at_10": 0.099, "precision_at_20": 0.05, "precision_at_50": 0.02, "precision_at_100": 0.01, "mrr_at_1": 0.83, "mrr_at_3": 0.8983333333333333, "mrr_at_5": 0.9003333333333333, "mrr_at_10": 0.9013333333333334, "mrr_at_20": 0.9013333333333334, "mrr_at_50": 0.9013333333333334, "mrr_at_100": 0.9013333333333334, "naucs_at_1_max": 0.44109172822044124, "naucs_at_1_std": -0.10191733459060262, "naucs_at_1_diff1": 0.9325527790874335, "naucs_at_3_max": 
0.5720510426392755, "naucs_at_3_std": -0.2759103641456547, "naucs_at_3_diff1": 0.8638344226579548, "naucs_at_5_max": 1.0, "naucs_at_5_std": 0.5541549953314738, "naucs_at_5_diff1": 0.7222222222222276, "naucs_at_10_max": 1.0, "naucs_at_10_std": 0.5541549953314738, "naucs_at_10_diff1": 0.7222222222222276, "naucs_at_20_max": 1.0, "naucs_at_20_std": 1.0, "naucs_at_20_diff1": 1.0, "naucs_at_50_max": NaN, "naucs_at_50_std": NaN, "naucs_at_50_diff1": NaN, "naucs_at_100_max": NaN, "naucs_at_100_std": NaN, "naucs_at_100_diff1": NaN}, "infovqa_subsampled": {"ndcg_at_1": 0.788, "ndcg_at_3": 0.83459, "ndcg_at_5": 0.8444, "ndcg_at_10": 0.85477, "ndcg_at_20": 0.86156, "ndcg_at_50": 0.86512, "ndcg_at_100": 0.86701, "map_at_1": 0.788, "map_at_3": 0.82367, "map_at_5": 0.82907, "map_at_10": 0.83335, "map_at_20": 0.83533, "map_at_50": 0.83589, "map_at_100": 0.83604, "recall_at_1": 0.788, "recall_at_3": 0.866, "recall_at_5": 0.89, "recall_at_10": 0.922, "recall_at_20": 0.948, "recall_at_50": 0.966, "recall_at_100": 0.978, "precision_at_1": 0.788, "precision_at_3": 0.28867, "precision_at_5": 0.178, "precision_at_10": 0.0922, "precision_at_20": 0.0474, "precision_at_50": 0.01932, "precision_at_100": 0.00978, "mrr_at_1": 0.79, "mrr_at_3": 0.8243333333333334, "mrr_at_5": 0.8296333333333333, "mrr_at_10": 0.8337198412698412, "mrr_at_20": 0.835826794404426, "mrr_at_50": 0.8363718978560812, "mrr_at_100": 0.8364841255520045, "naucs_at_1_max": 0.564645172377579, "naucs_at_1_std": 0.07679418756742774, "naucs_at_1_diff1": 0.9027840100820121, "naucs_at_3_max": 0.6547775199273532, "naucs_at_3_std": 0.20628594490969693, "naucs_at_3_diff1": 0.8634273606526658, "naucs_at_5_max": 0.6722876041577192, "naucs_at_5_std": 0.249360020616785, "naucs_at_5_diff1": 0.8348939094579507, "naucs_at_10_max": 0.8402044578515178, "naucs_at_10_std": 0.488951136009961, "naucs_at_10_diff1": 0.8440230793171982, "naucs_at_20_max": 0.8576276664511968, "naucs_at_20_std": 0.5467751203045319, "naucs_at_20_diff1": 0.8361524096818206, "naucs_at_50_max": 0.9004503762289214, "naucs_at_50_std": 0.7550667325753828, "naucs_at_50_diff1": 0.7987037952435864, "naucs_at_100_max": 0.8866819455054707, "naucs_at_100_std": 0.6872506578388877, "naucs_at_100_diff1": 0.8189882013411434}, "docvqa_subsampled": {"ndcg_at_1": 0.442, "ndcg_at_3": 0.50843, "ndcg_at_5": 0.52685, "ndcg_at_10": 0.54636, "ndcg_at_20": 0.55977, "ndcg_at_50": 0.57411, "ndcg_at_100": 0.58388, "map_at_1": 0.442, "map_at_3": 0.492, "map_at_5": 0.5024, "map_at_10": 0.51014, "map_at_20": 0.51396, "map_at_50": 0.51629, "map_at_100": 0.51716, "recall_at_1": 0.442, "recall_at_3": 0.556, "recall_at_5": 0.6, "recall_at_10": 0.662, "recall_at_20": 0.714, "recall_at_50": 0.786, "recall_at_100": 0.846, "precision_at_1": 0.442, "precision_at_3": 0.18533, "precision_at_5": 0.12, "precision_at_10": 0.0662, "precision_at_20": 0.0357, "precision_at_50": 0.01572, "precision_at_100": 0.00846, "mrr_at_1": 0.44, "mrr_at_3": 0.492, "mrr_at_5": 0.5027, "mrr_at_10": 0.5095595238095239, "mrr_at_20": 0.5132770588458669, "mrr_at_50": 0.5156818249348138, "mrr_at_100": 0.5163464533728304, "naucs_at_1_max": 0.3535234553133254, "naucs_at_1_std": 0.6565424405226743, "naucs_at_1_diff1": 0.8737923667391878, "naucs_at_3_max": 0.25263303691056965, "naucs_at_3_std": 0.7955022132774029, "naucs_at_3_diff1": 0.7975232775726874, "naucs_at_5_max": 0.2028851291184328, "naucs_at_5_std": 0.8435470466013651, "naucs_at_5_diff1": 0.7929504303947756, "naucs_at_10_max": 0.1453567608454435, "naucs_at_10_std": 0.881687287364671, "naucs_at_10_diff1": 
0.7690826149688607, "naucs_at_20_max": 0.04475463124388333, "naucs_at_20_std": 0.8884602262289066, "naucs_at_20_diff1": 0.7420375598471808, "naucs_at_50_max": -0.04904539186557294, "naucs_at_50_std": 0.8852432604049678, "naucs_at_50_diff1": 0.7128641087243929, "naucs_at_100_max": -0.023383786394195364, "naucs_at_100_std": 0.870448739944336, "naucs_at_100_diff1": 0.6722266702250679}, "arxivqa_subsampled": {"ndcg_at_1": 0.764, "ndcg_at_3": 0.81433, "ndcg_at_5": 0.82896, "ndcg_at_10": 0.84226, "ndcg_at_20": 0.85147, "ndcg_at_50": 0.85508, "ndcg_at_100": 0.85638, "map_at_1": 0.764, "map_at_3": 0.802, "map_at_5": 0.81, "map_at_10": 0.8157, "map_at_20": 0.8183, "map_at_50": 0.8189, "map_at_100": 0.81901, "recall_at_1": 0.764, "recall_at_3": 0.85, "recall_at_5": 0.886, "recall_at_10": 0.926, "recall_at_20": 0.962, "recall_at_50": 0.98, "recall_at_100": 0.988, "precision_at_1": 0.764, "precision_at_3": 0.28333, "precision_at_5": 0.1772, "precision_at_10": 0.0926, "precision_at_20": 0.0481, "precision_at_50": 0.0196, "precision_at_100": 0.00988, "mrr_at_1": 0.77, "mrr_at_3": 0.8053333333333333, "mrr_at_5": 0.8113333333333334, "mrr_at_10": 0.8185563492063492, "mrr_at_20": 0.820747419771491, "mrr_at_50": 0.8213430501215044, "mrr_at_100": 0.821454277542236, "naucs_at_1_max": 0.671075133964199, "naucs_at_1_std": 0.11374898046885278, "naucs_at_1_diff1": 0.9173193127702304, "naucs_at_3_max": 0.6937845753335508, "naucs_at_3_std": 0.08132769280833158, "naucs_at_3_diff1": 0.8599088838268806, "naucs_at_5_max": 0.7187390746999195, "naucs_at_5_std": 0.10096225881099288, "naucs_at_5_diff1": 0.8398038856610107, "naucs_at_10_max": 0.6757892346127655, "naucs_at_10_std": 0.07752290105231209, "naucs_at_10_diff1": 0.834961011431599, "naucs_at_20_max": 0.866823922551477, "naucs_at_20_std": 0.4288417121234476, "naucs_at_20_diff1": 0.9049093321539146, "naucs_at_50_max": 0.9161998132586351, "naucs_at_50_std": 0.5491129785247385, "naucs_at_50_diff1": 0.8846872082166126, "naucs_at_100_max": 0.9346405228758138, "naucs_at_100_std": 0.7751322751322711, "naucs_at_100_diff1": 0.9256924992219123}, "tabfquad_subsampled": {"ndcg_at_1": 0.77857, "ndcg_at_3": 0.85229, "ndcg_at_5": 0.86705, "ndcg_at_10": 0.87412, "ndcg_at_20": 0.88117, "ndcg_at_50": 0.88249, "ndcg_at_100": 0.8831, "map_at_1": 0.77857, "map_at_3": 0.83452, "map_at_5": 0.84274, "map_at_10": 0.84574, "map_at_20": 0.84758, "map_at_50": 0.84775, "map_at_100": 0.84782, "recall_at_1": 0.77857, "recall_at_3": 0.90357, "recall_at_5": 0.93929, "recall_at_10": 0.96071, "recall_at_20": 0.98929, "recall_at_50": 0.99643, "recall_at_100": 1.0, "precision_at_1": 0.77857, "precision_at_3": 0.30119, "precision_at_5": 0.18786, "precision_at_10": 0.09607, "precision_at_20": 0.04946, "precision_at_50": 0.01993, "precision_at_100": 0.01, "mrr_at_1": 0.7785714285714286, "mrr_at_3": 0.8351190476190476, "mrr_at_5": 0.8429761904761904, "mrr_at_10": 0.8459807256235828, "mrr_at_20": 0.8478239927887484, "mrr_at_50": 0.8479991465823232, "mrr_at_100": 0.8480652841484608, "naucs_at_1_max": 0.5082211641432032, "naucs_at_1_std": 0.26902482824521823, "naucs_at_1_diff1": 0.8378750185346884, "naucs_at_3_max": 0.6534910260400478, "naucs_at_3_std": 0.5397344122834327, "naucs_at_3_diff1": 0.8195698032299361, "naucs_at_5_max": 0.6372274399956075, "naucs_at_5_std": 0.5036249794035255, "naucs_at_5_diff1": 0.8168286922612179, "naucs_at_10_max": 0.5800016976487535, "naucs_at_10_std": 0.4681266445972378, "naucs_at_10_diff1": 0.8242933537051196, "naucs_at_20_max": 0.478835978836005, "naucs_at_20_std": 
0.08667911609088395, "naucs_at_20_diff1": 0.807812013694365, "naucs_at_50_max": 1.0, "naucs_at_50_std": 1.0, "naucs_at_50_diff1": 0.8692810457515607, "naucs_at_100_max": 1.0, "naucs_at_100_std": 1.0, "naucs_at_100_diff1": 1.0}, "tatdqa": {"ndcg_at_1": 0.54841, "ndcg_at_3": 0.6562, "ndcg_at_5": 0.68326, "ndcg_at_10": 0.70893, "ndcg_at_20": 0.72222, "ndcg_at_50": 0.7284, "ndcg_at_100": 0.73201, "map_at_1": 0.54841, "map_at_3": 0.63039, "map_at_5": 0.6453, "map_at_10": 0.65601, "map_at_20": 0.65978, "map_at_50": 0.6608, "map_at_100": 0.66112, "recall_at_1": 0.54841, "recall_at_3": 0.73061, "recall_at_5": 0.79675, "recall_at_10": 0.87553, "recall_at_20": 0.92724, "recall_at_50": 0.95791, "recall_at_100": 0.98016, "precision_at_1": 0.54841, "precision_at_3": 0.24354, "precision_at_5": 0.15935, "precision_at_10": 0.08755, "precision_at_20": 0.04636, "precision_at_50": 0.01916, "precision_at_100": 0.0098, "mrr_at_1": 0.5453998797354179, "mrr_at_3": 0.6282822208859491, "mrr_at_5": 0.6424133092804168, "mrr_at_10": 0.6538699424448072, "mrr_at_20": 0.657559818963894, "mrr_at_50": 0.6584837479678501, "mrr_at_100": 0.6587944596871749, "naucs_at_1_max": 0.3090653897821844, "naucs_at_1_std": 0.04930923898225459, "naucs_at_1_diff1": 0.7043054798202365, "naucs_at_3_max": 0.3130488042819241, "naucs_at_3_std": 0.10722995191204956, "naucs_at_3_diff1": 0.6136469206170786, "naucs_at_5_max": 0.3106346856808471, "naucs_at_5_std": 0.14652818200482404, "naucs_at_5_diff1": 0.5986106446949901, "naucs_at_10_max": 0.40632427830908485, "naucs_at_10_std": 0.29243794470603673, "naucs_at_10_diff1": 0.619230995521158, "naucs_at_20_max": 0.5480857947858365, "naucs_at_20_std": 0.47385204289648464, "naucs_at_20_diff1": 0.5875153603254067, "naucs_at_50_max": 0.6165541741820008, "naucs_at_50_std": 0.6292040856038491, "naucs_at_50_diff1": 0.596167522127439, "naucs_at_100_max": 0.6907473482145399, "naucs_at_100_std": 0.7358520747031099, "naucs_at_100_diff1": 0.6689903536377996}, "shift_project": {"ndcg_at_1": 0.53, "ndcg_at_3": 0.67357, "ndcg_at_5": 0.7024, "ndcg_at_10": 0.72124, "ndcg_at_20": 0.73433, "ndcg_at_50": 0.74196, "ndcg_at_100": 0.74196, "map_at_1": 0.53, "map_at_3": 0.64, "map_at_5": 0.656, "map_at_10": 0.66346, "map_at_20": 0.66728, "map_at_50": 0.66837, "map_at_100": 0.66837, "recall_at_1": 0.53, "recall_at_3": 0.77, "recall_at_5": 0.84, "recall_at_10": 0.9, "recall_at_20": 0.95, "recall_at_50": 0.99, "recall_at_100": 0.99, "precision_at_1": 0.53, "precision_at_3": 0.25667, "precision_at_5": 0.168, "precision_at_10": 0.09, "precision_at_20": 0.0475, "precision_at_50": 0.0198, "precision_at_100": 0.0099, "mrr_at_1": 0.54, "mrr_at_3": 0.6516666666666667, "mrr_at_5": 0.6651666666666667, "mrr_at_10": 0.6737341269841269, "mrr_at_20": 0.6768915945165945, "mrr_at_50": 0.6779969181064383, "mrr_at_100": 0.6779969181064383, "naucs_at_1_max": 0.04138888507058511, "naucs_at_1_std": -0.2798938817028416, "naucs_at_1_diff1": 0.7153362932823818, "naucs_at_3_max": 0.057585998522862565, "naucs_at_3_std": -0.34171124191490737, "naucs_at_3_diff1": 0.6187193661735401, "naucs_at_5_max": 0.3037248404516453, "naucs_at_5_std": -0.2680412371134025, "naucs_at_5_diff1": 0.6450662739322527, "naucs_at_10_max": 0.2270774976657352, "naucs_at_10_std": -0.29145658263305096, "naucs_at_10_diff1": 0.5507469654528476, "naucs_at_20_max": -0.005415499533139113, "naucs_at_20_std": -0.2233426704014865, "naucs_at_20_diff1": 0.32791783380019096, "naucs_at_50_max": 0.554154995331464, "naucs_at_50_std": 0.35807656395892007, "naucs_at_50_diff1": 
0.12278244631185525, "naucs_at_100_max": 0.554154995331464, "naucs_at_100_std": 0.35807656395892007, "naucs_at_100_diff1": 0.12278244631185525}}
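results.json stores one retrieval-metrics dictionary per evaluation set (NDCG, MAP, recall, precision, MRR and NAUCS at several cutoffs). A small sketch for inspecting it, assuming the file sits in the working directory:

```python
import json

with open("results.json") as f:
    results = json.load(f)

# Print NDCG@5 for each evaluation set, best first.
for name, metrics in sorted(results.items(), key=lambda kv: -kv[1]["ndcg_at_5"]):
    print(f"{name:45s} ndcg_at_5 = {metrics['ndcg_at_5']:.3f}")
```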
special_tokens_map.json ADDED
@@ -0,0 +1,39 @@
+ {
+ "additional_special_tokens": [
+ {
+ "content": "<image>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ ],
+ "bos_token": {
+ "content": "<bos>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<eos>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:172fab587d68c56b63eb3620057c62dfd15e503079ff7fce584692e3fd5bf4da
+ size 34600820
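Like training_args.bin above, tokenizer.json is stored as a Git LFS pointer: the actual file is fetched by LFS and can be checked against the recorded oid. A minimal sketch of that verification, assuming the resolved file has been downloaded locally:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file and return its sha256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# oid recorded in the LFS pointer above
assert sha256_of("tokenizer.json") == "172fab587d68c56b63eb3620057c62dfd15e503079ff7fce584692e3fd5bf4da"
```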
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
training_config.yml ADDED
@@ -0,0 +1,64 @@
+ config:
+ (): colpali_engine.trainer.colmodel_training.ColModelTrainingConfig
+ output_dir: !path ../../../models/train_colpali2-3b-pt-448-5e5
+ processor:
+ (): colpali_engine.utils.transformers_wrappers.AllPurposeWrapper
+ class_to_instanciate: !ext colpali_engine.models.ColPaliProcessor
+ pretrained_model_name_or_path: "./models/colpaligemma2-3b-pt-448-base" # "./models/paligemma-3b-mix-448"
+ max_length: 50
+ model:
+ (): colpali_engine.utils.transformers_wrappers.AllPurposeWrapper
+ class_to_instanciate: !ext colpali_engine.models.ColPali
+ pretrained_model_name_or_path: "./models/colpaligemma2-3b-pt-448-base"
+ torch_dtype: !ext torch.bfloat16
+ attn_implementation: "flash_attention_2"
+ # device_map: "auto"
+ # quantization_config:
+ # (): transformers.BitsAndBytesConfig
+ # load_in_4bit: true
+ # bnb_4bit_quant_type: "nf4"
+ # bnb_4bit_compute_dtype: "bfloat16"
+ # bnb_4bit_use_double_quant: true
+
+ dataset_loading_func: !ext colpali_engine.utils.dataset_transformation.load_train_set
+ eval_dataset_loader: !import ../data/test_data.yaml
+
+ max_length: 50
+ run_eval: true
+
+ loss_func:
+ (): colpali_engine.loss.late_interaction_losses.ColbertPairwiseCELoss
+ tr_args:
+ (): transformers.training_args.TrainingArguments
+ output_dir: null
+ overwrite_output_dir: true
+ num_train_epochs: 5
+ per_device_train_batch_size: 32
+ gradient_checkpointing: true
+ gradient_checkpointing_kwargs: { "use_reentrant": false }
+ # 6 x 8 gpus = 48 batch size
+ # gradient_accumulation_steps: 4
+ per_device_eval_batch_size: 32
+ eval_strategy: "steps"
+ dataloader_num_workers: 16
+ # bf16: true
+ save_steps: 500
+ logging_steps: 10
+ eval_steps: 100
+ warmup_steps: 100
+ learning_rate: 5e-5
+ save_total_limit: 1
+ resume_from_checkpoint: false
+ report_to: "wandb"
+
+ peft_config:
+ (): peft.LoraConfig
+ r: 32
+ lora_alpha: 32
+ lora_dropout: 0.1
+ init_lora_weights: "gaussian"
+ bias: "none"
+ task_type: "FEATURE_EXTRACTION"
+ target_modules: '(.*(language_model).*(down_proj|gate_proj|up_proj|k_proj|q_proj|v_proj|o_proj).*$|.*(custom_text_proj).*$)'
+ # target_modules: '(.*(language_model).*(down_proj|gate_proj|up_proj|k_proj|q_proj|v_proj|o_proj).*$|.*(custom_text_proj).*$)'
+
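The `peft_config` block above corresponds to a standard `peft.LoraConfig`: rank-32 LoRA adapters on the language model's attention/MLP projections plus the `custom_text_proj` head. Expressed directly in Python (a sketch; colpali_engine builds this object from the YAML):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.1,
    init_lora_weights="gaussian",
    bias="none",
    task_type="FEATURE_EXTRACTION",
    # Regex from the config: target the language model's projection layers
    # and the custom_text_proj output head.
    target_modules=r"(.*(language_model).*(down_proj|gate_proj|up_proj|k_proj|q_proj|v_proj|o_proj).*$|.*(custom_text_proj).*$)",
)
```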