Commit f7400ff, committed by root
Parent: ad4369b
Files changed (6)
  1. dev.tsv +0 -0
  2. final-model.pt +3 -0
  3. loss.tsv +11 -0
  4. test.tsv +0 -0
  5. training.log +813 -0
  6. weights.txt +0 -0
dev.tsv ADDED
The diff for this file is too large to render. See raw diff
 
final-model.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:275ec01b9b537e09b63e7772738dc771b0547883a2bcda0424d4098cf7eb8720
+ size 2256883501
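The pointer above is a Git LFS stub; the actual ~2.3 GB checkpoint is fetched on clone or download. A minimal usage sketch, assuming the Flair library that produced this model (the local path and the "ner" tag type are assumptions inferred from the training log below):

    from flair.data import Sentence
    from flair.models import SequenceTagger

    # Load the downloaded checkpoint (hypothetical local path).
    tagger = SequenceTagger.load("final-model.pt")

    # Tag a sample sentence and print the predicted entity spans.
    sentence = Sentence("George Washington went to Washington.")
    tagger.predict(sentence)
    for span in sentence.get_spans("ner"):
        print(span)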
loss.tsv ADDED
@@ -0,0 +1,11 @@
+ EPOCH TIMESTAMP BAD_EPOCHS LEARNING_RATE TRAIN_LOSS DEV_LOSS DEV_PRECISION DEV_RECALL DEV_F1 DEV_ACCURACY
+ 1 00:30:30 4 0.0000 0.7202729176824617 0.20562097430229187 0.05 0.0014 0.0027 0.0014
+ 2 00:32:15 4 0.0000 0.3212406154600784 0.15934991836547852 0.1765 0.0042 0.0082 0.0041
+ 3 00:34:01 4 0.0000 0.2923256346762247 0.14386053383350372 0.2154 0.0393 0.0664 0.0344
+ 4 00:35:45 4 0.0000 0.2778034171537818 0.13249367475509644 0.2737 0.0687 0.1099 0.0582
+ 5 00:37:30 4 0.0000 0.26510193813684124 0.1335981786251068 0.2814 0.1038 0.1516 0.0824
+ 6 00:39:15 4 0.0000 0.25729809377259627 0.12874221801757812 0.3404 0.1571 0.215 0.121
+ 7 00:40:59 4 0.0000 0.25640539444537386 0.12849482893943787 0.372 0.1935 0.2546 0.1462
+ 8 00:42:45 4 0.0000 0.2515904317709163 0.13098381459712982 0.3446 0.2006 0.2535 0.1453
+ 9 00:44:30 4 0.0000 0.25032100312074507 0.1269032210111618 0.3832 0.1795 0.2445 0.1397
+ 10 00:46:15 4 0.0000 0.24774755008128432 0.12706945836544037 0.3887 0.1837 0.2495 0.143
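The table above is Flair's per-epoch metrics file (tab-separated, one row per epoch). A minimal sketch for inspecting it, assuming pandas is installed:

    import pandas as pd

    # loss.tsv is tab-separated; one row per training epoch.
    df = pd.read_csv("loss.tsv", sep="\t")
    print(df[["EPOCH", "TRAIN_LOSS", "DEV_LOSS", "DEV_F1"]])

    # Dev micro-F1 peaks at 0.2546 in epoch 7, then plateaus.
    print("best dev F1:", df["DEV_F1"].max())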
test.tsv ADDED
The diff for this file is too large to render. See raw diff
 
training.log ADDED
@@ -0,0 +1,813 @@
+ 2022-04-25 00:28:46,333 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:28:46,337 Model: "SequenceTagger(
+ (embeddings): TransformerWordEmbeddings(
+ (model): XLMRobertaModel(
+ (embeddings): RobertaEmbeddings(
+ (word_embeddings): Embedding(250002, 1024, padding_idx=1)
+ (position_embeddings): Embedding(514, 1024, padding_idx=1)
+ (token_type_embeddings): Embedding(1, 1024)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (encoder): RobertaEncoder(
+ (layer): ModuleList(
+ (0): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (1): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (2): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (3): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (4): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (5): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (6): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (7): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (8): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (9): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (10): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (11): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (12): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (13): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (14): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (15): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (16): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (17): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (18): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (19): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (20): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (21): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (22): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (23): RobertaLayer(
+ (attention): RobertaAttention(
+ (self): RobertaSelfAttention(
+ (query): Linear(in_features=1024, out_features=1024, bias=True)
+ (key): Linear(in_features=1024, out_features=1024, bias=True)
+ (value): Linear(in_features=1024, out_features=1024, bias=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ (output): RobertaSelfOutput(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ (intermediate): RobertaIntermediate(
+ (dense): Linear(in_features=1024, out_features=4096, bias=True)
+ (intermediate_act_fn): GELUActivation()
+ )
+ (output): RobertaOutput(
+ (dense): Linear(in_features=4096, out_features=1024, bias=True)
+ (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
+ (dropout): Dropout(p=0.1, inplace=False)
+ )
+ )
+ )
+ )
+ (pooler): RobertaPooler(
+ (dense): Linear(in_features=1024, out_features=1024, bias=True)
+ (activation): Tanh()
+ )
+ )
+ )
+ (word_dropout): WordDropout(p=0.05)
+ (locked_dropout): LockedDropout(p=0.5)
+ (linear): Linear(in_features=1024, out_features=20, bias=True)
+ (loss_function): CrossEntropyLoss()
+ )"
+ 2022-04-25 00:28:46,337 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:28:46,338 Corpus: "Corpus: 352 train + 50 dev + 67 test sentences"
+ 2022-04-25 00:28:46,338 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:28:46,339 Parameters:
+ 2022-04-25 00:28:46,339 - learning_rate: "0.000005"
+ 2022-04-25 00:28:46,340 - mini_batch_size: "4"
+ 2022-04-25 00:28:46,340 - patience: "3"
+ 2022-04-25 00:28:46,340 - anneal_factor: "0.5"
+ 2022-04-25 00:28:46,341 - max_epochs: "10"
+ 2022-04-25 00:28:46,341 - shuffle: "True"
+ 2022-04-25 00:28:46,342 - train_with_dev: "False"
+ 2022-04-25 00:28:46,342 - batch_growth_annealing: "False"
+ 2022-04-25 00:28:46,343 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:28:46,343 Model training base path: "resources/taggers/ner_xlm_finedtuned_ck1"
+ 2022-04-25 00:28:46,344 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:28:46,345 Device: cuda:0
+ 2022-04-25 00:28:46,345 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:28:46,346 Embeddings storage mode: none
+ 2022-04-25 00:28:46,346 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:28:55,605 epoch 1 - iter 8/88 - loss 1.25822871 - samples/sec: 3.46 - lr: 0.000000
+ 2022-04-25 00:29:03,857 epoch 1 - iter 16/88 - loss 1.22365524 - samples/sec: 3.88 - lr: 0.000001
+ 2022-04-25 00:29:13,839 epoch 1 - iter 24/88 - loss 1.18822646 - samples/sec: 3.21 - lr: 0.000001
+ 2022-04-25 00:29:23,244 epoch 1 - iter 32/88 - loss 1.12798044 - samples/sec: 3.40 - lr: 0.000002
+ 2022-04-25 00:29:31,472 epoch 1 - iter 40/88 - loss 1.05740151 - samples/sec: 3.89 - lr: 0.000002
+ 2022-04-25 00:29:38,751 epoch 1 - iter 48/88 - loss 0.99049744 - samples/sec: 4.40 - lr: 0.000003
+ 2022-04-25 00:29:46,982 epoch 1 - iter 56/88 - loss 0.92466364 - samples/sec: 3.89 - lr: 0.000003
+ 2022-04-25 00:29:54,849 epoch 1 - iter 64/88 - loss 0.87012404 - samples/sec: 4.07 - lr: 0.000004
+ 2022-04-25 00:30:04,123 epoch 1 - iter 72/88 - loss 0.80738819 - samples/sec: 3.45 - lr: 0.000004
+ 2022-04-25 00:30:13,985 epoch 1 - iter 80/88 - loss 0.76049921 - samples/sec: 3.25 - lr: 0.000005
+ 2022-04-25 00:30:23,710 epoch 1 - iter 88/88 - loss 0.72027292 - samples/sec: 3.29 - lr: 0.000005
+ 2022-04-25 00:30:23,712 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:30:23,713 EPOCH 1 done: loss 0.7203 - lr 0.000005
+ 2022-04-25 00:30:30,732 Evaluating as a multi-label problem: False
+ 2022-04-25 00:30:30,742 DEV : loss 0.20562097430229187 - f1-score (micro avg) 0.0027
+ 2022-04-25 00:30:30,751 BAD EPOCHS (no improvement): 4
+ 2022-04-25 00:30:30,753 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:30:39,284 epoch 2 - iter 8/88 - loss 0.32586993 - samples/sec: 3.75 - lr: 0.000005
+ 2022-04-25 00:30:47,933 epoch 2 - iter 16/88 - loss 0.33892041 - samples/sec: 3.70 - lr: 0.000005
+ 2022-04-25 00:30:56,990 epoch 2 - iter 24/88 - loss 0.33672071 - samples/sec: 3.53 - lr: 0.000005
+ 2022-04-25 00:31:05,736 epoch 2 - iter 32/88 - loss 0.33060665 - samples/sec: 3.66 - lr: 0.000005
+ 2022-04-25 00:31:13,937 epoch 2 - iter 40/88 - loss 0.33045049 - samples/sec: 3.90 - lr: 0.000005
+ 2022-04-25 00:31:23,091 epoch 2 - iter 48/88 - loss 0.32851558 - samples/sec: 3.50 - lr: 0.000005
+ 2022-04-25 00:31:31,313 epoch 2 - iter 56/88 - loss 0.32679558 - samples/sec: 3.89 - lr: 0.000005
+ 2022-04-25 00:31:41,184 epoch 2 - iter 64/88 - loss 0.32379177 - samples/sec: 3.24 - lr: 0.000005
+ 2022-04-25 00:31:49,757 epoch 2 - iter 72/88 - loss 0.32124627 - samples/sec: 3.73 - lr: 0.000005
+ 2022-04-25 00:31:57,768 epoch 2 - iter 80/88 - loss 0.32825760 - samples/sec: 4.00 - lr: 0.000004
+ 2022-04-25 00:32:08,014 epoch 2 - iter 88/88 - loss 0.32124062 - samples/sec: 3.12 - lr: 0.000004
+ 2022-04-25 00:32:08,017 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:32:08,018 EPOCH 2 done: loss 0.3212 - lr 0.000004
+ 2022-04-25 00:32:15,400 Evaluating as a multi-label problem: False
+ 2022-04-25 00:32:15,415 DEV : loss 0.15934991836547852 - f1-score (micro avg) 0.0082
+ 2022-04-25 00:32:15,428 BAD EPOCHS (no improvement): 4
+ 2022-04-25 00:32:15,431 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:32:25,133 epoch 3 - iter 8/88 - loss 0.26548392 - samples/sec: 3.30 - lr: 0.000004
+ 2022-04-25 00:32:33,272 epoch 3 - iter 16/88 - loss 0.28651787 - samples/sec: 3.93 - lr: 0.000004
+ 2022-04-25 00:32:41,433 epoch 3 - iter 24/88 - loss 0.29010948 - samples/sec: 3.92 - lr: 0.000004
+ 2022-04-25 00:32:50,243 epoch 3 - iter 32/88 - loss 0.29681501 - samples/sec: 3.63 - lr: 0.000004
+ 2022-04-25 00:32:59,007 epoch 3 - iter 40/88 - loss 0.29554105 - samples/sec: 3.65 - lr: 0.000004
+ 2022-04-25 00:33:07,692 epoch 3 - iter 48/88 - loss 0.29343573 - samples/sec: 3.69 - lr: 0.000004
+ 2022-04-25 00:33:16,189 epoch 3 - iter 56/88 - loss 0.29547981 - samples/sec: 3.77 - lr: 0.000004
+ 2022-04-25 00:33:25,763 epoch 3 - iter 64/88 - loss 0.28997972 - samples/sec: 3.34 - lr: 0.000004
+ 2022-04-25 00:33:36,471 epoch 3 - iter 72/88 - loss 0.29000464 - samples/sec: 2.99 - lr: 0.000004
+ 2022-04-25 00:33:45,481 epoch 3 - iter 80/88 - loss 0.29344732 - samples/sec: 3.55 - lr: 0.000004
+ 2022-04-25 00:33:53,793 epoch 3 - iter 88/88 - loss 0.29232563 - samples/sec: 3.85 - lr: 0.000004
+ 2022-04-25 00:33:53,797 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:33:53,798 EPOCH 3 done: loss 0.2923 - lr 0.000004
+ 2022-04-25 00:34:00,978 Evaluating as a multi-label problem: False
+ 2022-04-25 00:34:00,991 DEV : loss 0.14386053383350372 - f1-score (micro avg) 0.0664
+ 2022-04-25 00:34:00,999 BAD EPOCHS (no improvement): 4
+ 2022-04-25 00:34:01,000 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:34:09,617 epoch 4 - iter 8/88 - loss 0.32142401 - samples/sec: 3.72 - lr: 0.000004
+ 2022-04-25 00:34:17,886 epoch 4 - iter 16/88 - loss 0.30301646 - samples/sec: 3.87 - lr: 0.000004
+ 2022-04-25 00:34:27,850 epoch 4 - iter 24/88 - loss 0.28913590 - samples/sec: 3.21 - lr: 0.000004
+ 2022-04-25 00:34:35,703 epoch 4 - iter 32/88 - loss 0.29200045 - samples/sec: 4.08 - lr: 0.000004
+ 2022-04-25 00:34:44,383 epoch 4 - iter 40/88 - loss 0.28601870 - samples/sec: 3.69 - lr: 0.000004
+ 2022-04-25 00:34:53,597 epoch 4 - iter 48/88 - loss 0.28333016 - samples/sec: 3.47 - lr: 0.000004
+ 2022-04-25 00:35:02,237 epoch 4 - iter 56/88 - loss 0.28101070 - samples/sec: 3.70 - lr: 0.000004
+ 2022-04-25 00:35:11,887 epoch 4 - iter 64/88 - loss 0.27725419 - samples/sec: 3.32 - lr: 0.000003
+ 2022-04-25 00:35:20,971 epoch 4 - iter 72/88 - loss 0.27522330 - samples/sec: 3.52 - lr: 0.000003
+ 2022-04-25 00:35:29,993 epoch 4 - iter 80/88 - loss 0.27767522 - samples/sec: 3.55 - lr: 0.000003
+ 2022-04-25 00:35:38,121 epoch 4 - iter 88/88 - loss 0.27780342 - samples/sec: 3.94 - lr: 0.000003
+ 2022-04-25 00:35:38,125 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:35:38,126 EPOCH 4 done: loss 0.2778 - lr 0.000003
+ 2022-04-25 00:35:45,523 Evaluating as a multi-label problem: False
+ 2022-04-25 00:35:45,536 DEV : loss 0.13249367475509644 - f1-score (micro avg) 0.1099
+ 2022-04-25 00:35:45,545 BAD EPOCHS (no improvement): 4
+ 2022-04-25 00:35:45,547 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:35:55,215 epoch 5 - iter 8/88 - loss 0.26147172 - samples/sec: 3.31 - lr: 0.000003
+ 2022-04-25 00:36:05,160 epoch 5 - iter 16/88 - loss 0.26559845 - samples/sec: 3.22 - lr: 0.000003
+ 2022-04-25 00:36:13,857 epoch 5 - iter 24/88 - loss 0.26674131 - samples/sec: 3.68 - lr: 0.000003
+ 2022-04-25 00:36:22,022 epoch 5 - iter 32/88 - loss 0.26445641 - samples/sec: 3.92 - lr: 0.000003
+ 2022-04-25 00:36:29,834 epoch 5 - iter 40/88 - loss 0.26849622 - samples/sec: 4.10 - lr: 0.000003
+ 2022-04-25 00:36:38,499 epoch 5 - iter 48/88 - loss 0.26495720 - samples/sec: 3.69 - lr: 0.000003
+ 2022-04-25 00:36:46,651 epoch 5 - iter 56/88 - loss 0.26747065 - samples/sec: 3.93 - lr: 0.000003
+ 2022-04-25 00:36:56,479 epoch 5 - iter 64/88 - loss 0.26716735 - samples/sec: 3.26 - lr: 0.000003
+ 2022-04-25 00:37:05,247 epoch 5 - iter 72/88 - loss 0.26323866 - samples/sec: 3.65 - lr: 0.000003
+ 2022-04-25 00:37:14,099 epoch 5 - iter 80/88 - loss 0.26763434 - samples/sec: 3.62 - lr: 0.000003
+ 2022-04-25 00:37:23,612 epoch 5 - iter 88/88 - loss 0.26510194 - samples/sec: 3.36 - lr: 0.000003
+ 2022-04-25 00:37:23,615 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:37:23,615 EPOCH 5 done: loss 0.2651 - lr 0.000003
+ 2022-04-25 00:37:30,711 Evaluating as a multi-label problem: False
+ 2022-04-25 00:37:30,723 DEV : loss 0.1335981786251068 - f1-score (micro avg) 0.1516
+ 2022-04-25 00:37:30,734 BAD EPOCHS (no improvement): 4
+ 2022-04-25 00:37:30,735 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:37:39,100 epoch 6 - iter 8/88 - loss 0.25254979 - samples/sec: 3.83 - lr: 0.000003
+ 2022-04-25 00:37:48,489 epoch 6 - iter 16/88 - loss 0.24629379 - samples/sec: 3.41 - lr: 0.000003
+ 2022-04-25 00:37:56,856 epoch 6 - iter 24/88 - loss 0.25016090 - samples/sec: 3.83 - lr: 0.000003
+ 2022-04-25 00:38:06,647 epoch 6 - iter 32/88 - loss 0.25646469 - samples/sec: 3.27 - lr: 0.000003
+ 2022-04-25 00:38:14,700 epoch 6 - iter 40/88 - loss 0.25909943 - samples/sec: 3.97 - lr: 0.000003
+ 2022-04-25 00:38:23,772 epoch 6 - iter 48/88 - loss 0.25850607 - samples/sec: 3.53 - lr: 0.000002
+ 2022-04-25 00:38:32,983 epoch 6 - iter 56/88 - loss 0.25417190 - samples/sec: 3.48 - lr: 0.000002
+ 2022-04-25 00:38:42,014 epoch 6 - iter 64/88 - loss 0.25534730 - samples/sec: 3.54 - lr: 0.000002
+ 2022-04-25 00:38:49,968 epoch 6 - iter 72/88 - loss 0.25617877 - samples/sec: 4.02 - lr: 0.000002
+ 2022-04-25 00:38:58,183 epoch 6 - iter 80/88 - loss 0.25537613 - samples/sec: 3.90 - lr: 0.000002
+ 2022-04-25 00:39:07,930 epoch 6 - iter 88/88 - loss 0.25729809 - samples/sec: 3.28 - lr: 0.000002
+ 2022-04-25 00:39:07,933 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:39:07,934 EPOCH 6 done: loss 0.2573 - lr 0.000002
+ 2022-04-25 00:39:15,220 Evaluating as a multi-label problem: False
+ 2022-04-25 00:39:15,238 DEV : loss 0.12874221801757812 - f1-score (micro avg) 0.215
+ 2022-04-25 00:39:15,250 BAD EPOCHS (no improvement): 4
+ 2022-04-25 00:39:15,252 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:39:23,920 epoch 7 - iter 8/88 - loss 0.25032306 - samples/sec: 3.69 - lr: 0.000002
+ 2022-04-25 00:39:32,341 epoch 7 - iter 16/88 - loss 0.24173648 - samples/sec: 3.80 - lr: 0.000002
+ 2022-04-25 00:39:42,283 epoch 7 - iter 24/88 - loss 0.25674155 - samples/sec: 3.22 - lr: 0.000002
+ 2022-04-25 00:39:50,287 epoch 7 - iter 32/88 - loss 0.25221355 - samples/sec: 4.00 - lr: 0.000002
+ 2022-04-25 00:39:58,742 epoch 7 - iter 40/88 - loss 0.25534056 - samples/sec: 3.79 - lr: 0.000002
+ 2022-04-25 00:40:07,531 epoch 7 - iter 48/88 - loss 0.25396630 - samples/sec: 3.64 - lr: 0.000002
+ 2022-04-25 00:40:16,857 epoch 7 - iter 56/88 - loss 0.25506091 - samples/sec: 3.43 - lr: 0.000002
+ 2022-04-25 00:40:26,056 epoch 7 - iter 64/88 - loss 0.25606985 - samples/sec: 3.48 - lr: 0.000002
+ 2022-04-25 00:40:34,742 epoch 7 - iter 72/88 - loss 0.25690660 - samples/sec: 3.68 - lr: 0.000002
+ 2022-04-25 00:40:43,201 epoch 7 - iter 80/88 - loss 0.25644415 - samples/sec: 3.78 - lr: 0.000002
+ 2022-04-25 00:40:53,512 epoch 7 - iter 88/88 - loss 0.25640539 - samples/sec: 3.10 - lr: 0.000002
+ 2022-04-25 00:40:53,515 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:40:53,516 EPOCH 7 done: loss 0.2564 - lr 0.000002
+ 2022-04-25 00:40:59,919 Evaluating as a multi-label problem: False
+ 2022-04-25 00:40:59,934 DEV : loss 0.12849482893943787 - f1-score (micro avg) 0.2546
+ 2022-04-25 00:40:59,943 BAD EPOCHS (no improvement): 4
+ 2022-04-25 00:40:59,944 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:41:09,917 epoch 8 - iter 8/88 - loss 0.26072190 - samples/sec: 3.21 - lr: 0.000002
+ 2022-04-25 00:41:18,102 epoch 8 - iter 16/88 - loss 0.27005318 - samples/sec: 3.91 - lr: 0.000002
+ 2022-04-25 00:41:26,730 epoch 8 - iter 24/88 - loss 0.26735720 - samples/sec: 3.71 - lr: 0.000002
+ 2022-04-25 00:41:35,802 epoch 8 - iter 32/88 - loss 0.25981810 - samples/sec: 3.53 - lr: 0.000001
+ 2022-04-25 00:41:45,065 epoch 8 - iter 40/88 - loss 0.25497924 - samples/sec: 3.46 - lr: 0.000001
+ 2022-04-25 00:41:53,266 epoch 8 - iter 48/88 - loss 0.25297761 - samples/sec: 3.90 - lr: 0.000001
+ 2022-04-25 00:42:01,654 epoch 8 - iter 56/88 - loss 0.25588829 - samples/sec: 3.82 - lr: 0.000001
+ 2022-04-25 00:42:10,833 epoch 8 - iter 64/88 - loss 0.25234574 - samples/sec: 3.49 - lr: 0.000001
+ 2022-04-25 00:42:20,767 epoch 8 - iter 72/88 - loss 0.25437752 - samples/sec: 3.22 - lr: 0.000001
+ 2022-04-25 00:42:29,555 epoch 8 - iter 80/88 - loss 0.25358380 - samples/sec: 3.64 - lr: 0.000001
+ 2022-04-25 00:42:38,444 epoch 8 - iter 88/88 - loss 0.25159043 - samples/sec: 3.60 - lr: 0.000001
+ 2022-04-25 00:42:38,447 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:42:38,447 EPOCH 8 done: loss 0.2516 - lr 0.000001
+ 2022-04-25 00:42:45,466 Evaluating as a multi-label problem: False
+ 2022-04-25 00:42:45,478 DEV : loss 0.13098381459712982 - f1-score (micro avg) 0.2535
+ 2022-04-25 00:42:45,486 BAD EPOCHS (no improvement): 4
+ 2022-04-25 00:42:45,488 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:42:55,033 epoch 9 - iter 8/88 - loss 0.22931718 - samples/sec: 3.35 - lr: 0.000001
+ 2022-04-25 00:43:03,513 epoch 9 - iter 16/88 - loss 0.25355650 - samples/sec: 3.77 - lr: 0.000001
+ 2022-04-25 00:43:13,870 epoch 9 - iter 24/88 - loss 0.25289254 - samples/sec: 3.09 - lr: 0.000001
+ 2022-04-25 00:43:22,935 epoch 9 - iter 32/88 - loss 0.24994442 - samples/sec: 3.53 - lr: 0.000001
+ 2022-04-25 00:43:30,905 epoch 9 - iter 40/88 - loss 0.24795011 - samples/sec: 4.02 - lr: 0.000001
+ 2022-04-25 00:43:39,312 epoch 9 - iter 48/88 - loss 0.24733180 - samples/sec: 3.81 - lr: 0.000001
+ 2022-04-25 00:43:47,522 epoch 9 - iter 56/88 - loss 0.24885510 - samples/sec: 3.90 - lr: 0.000001
+ 2022-04-25 00:43:55,856 epoch 9 - iter 64/88 - loss 0.25085127 - samples/sec: 3.84 - lr: 0.000001
+ 2022-04-25 00:44:04,511 epoch 9 - iter 72/88 - loss 0.25141658 - samples/sec: 3.70 - lr: 0.000001
+ 2022-04-25 00:44:13,473 epoch 9 - iter 80/88 - loss 0.25114253 - samples/sec: 3.57 - lr: 0.000001
+ 2022-04-25 00:44:23,065 epoch 9 - iter 88/88 - loss 0.25032100 - samples/sec: 3.34 - lr: 0.000001
+ 2022-04-25 00:44:23,068 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:44:23,069 EPOCH 9 done: loss 0.2503 - lr 0.000001
+ 2022-04-25 00:44:30,828 Evaluating as a multi-label problem: False
+ 2022-04-25 00:44:30,844 DEV : loss 0.1269032210111618 - f1-score (micro avg) 0.2445
+ 2022-04-25 00:44:30,854 BAD EPOCHS (no improvement): 4
+ 2022-04-25 00:44:30,855 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:44:38,190 epoch 10 - iter 8/88 - loss 0.25877504 - samples/sec: 4.36 - lr: 0.000001
+ 2022-04-25 00:44:47,141 epoch 10 - iter 16/88 - loss 0.26538309 - samples/sec: 3.58 - lr: 0.000000
+ 2022-04-25 00:44:56,357 epoch 10 - iter 24/88 - loss 0.25992814 - samples/sec: 3.47 - lr: 0.000000
+ 2022-04-25 00:45:04,805 epoch 10 - iter 32/88 - loss 0.25024608 - samples/sec: 3.79 - lr: 0.000000
+ 2022-04-25 00:45:12,966 epoch 10 - iter 40/88 - loss 0.25450198 - samples/sec: 3.92 - lr: 0.000000
+ 2022-04-25 00:45:23,081 epoch 10 - iter 48/88 - loss 0.25508489 - samples/sec: 3.16 - lr: 0.000000
+ 2022-04-25 00:45:32,191 epoch 10 - iter 56/88 - loss 0.25273411 - samples/sec: 3.51 - lr: 0.000000
+ 2022-04-25 00:45:40,798 epoch 10 - iter 64/88 - loss 0.25090079 - samples/sec: 3.72 - lr: 0.000000
+ 2022-04-25 00:45:49,572 epoch 10 - iter 72/88 - loss 0.24954558 - samples/sec: 3.65 - lr: 0.000000
+ 2022-04-25 00:45:59,254 epoch 10 - iter 80/88 - loss 0.24933938 - samples/sec: 3.31 - lr: 0.000000
+ 2022-04-25 00:46:08,852 epoch 10 - iter 88/88 - loss 0.24774755 - samples/sec: 3.33 - lr: 0.000000
+ 2022-04-25 00:46:08,856 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:46:08,857 EPOCH 10 done: loss 0.2477 - lr 0.000000
+ 2022-04-25 00:46:15,919 Evaluating as a multi-label problem: False
+ 2022-04-25 00:46:15,935 DEV : loss 0.12706945836544037 - f1-score (micro avg) 0.2495
+ 2022-04-25 00:46:15,947 BAD EPOCHS (no improvement): 4
+ 2022-04-25 00:46:19,590 ----------------------------------------------------------------------------------------------------
+ 2022-04-25 00:46:19,592 Testing using last state of model ...
+ 2022-04-25 00:46:29,219 Evaluating as a multi-label problem: False
+ 2022-04-25 00:46:29,232 0.4412	0.2257	0.2986	0.1758
+ 2022-04-25 00:46:29,232
+ Results:
+ - F-score (micro) 0.2986
+ - F-score (macro) 0.147
+ - Accuracy 0.1758
+
+ By class:
+               precision    recall  f1-score   support
+
+          ORG     0.4718    0.2314    0.3105       687
+          LOC     0.3837    0.2171    0.2773       304
+         PENT     0.0000    0.0000    0.0000         6
+         MISC     0.0000    0.0000    0.0000         0
+
+    micro avg     0.4412    0.2257    0.2986       997
+    macro avg     0.2139    0.1121    0.1470       997
+ weighted avg     0.4421    0.2257    0.2985       997
+
+ 2022-04-25 00:46:29,233 ----------------------------------------------------------------------------------------------------
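The log records every hyperparameter needed to set up a comparable run. A minimal sketch, assuming Flair with an xlm-roberta-large backbone (which matches the 24-layer, 1024-dim XLMRobertaModel printed above); the corpus files and column format are hypothetical stand-ins for whatever produced the 352/50/67 sentence splits:

    from flair.datasets import ColumnCorpus
    from flair.embeddings import TransformerWordEmbeddings
    from flair.models import SequenceTagger
    from flair.trainers import ModelTrainer

    # Hypothetical CoNLL-style corpus matching the logged 352 train / 50 dev / 67 test split.
    corpus = ColumnCorpus("data/", {0: "text", 1: "ner"},
                          train_file="train.txt", dev_file="dev.txt", test_file="test.txt")
    label_dict = corpus.make_label_dictionary(label_type="ner")

    # Fine-tuned transformer embeddings; no RNN/CRF, matching the plain
    # Linear(1024 -> 20) + CrossEntropyLoss head in the model printout.
    embeddings = TransformerWordEmbeddings("xlm-roberta-large", fine_tune=True)
    tagger = SequenceTagger(embeddings=embeddings, tag_dictionary=label_dict,
                            tag_type="ner", use_rnn=False, use_crf=False,
                            reproject_embeddings=False)

    # fine_tune() uses a warm-up/decay schedule like the lr trace in the log
    # (0 -> 5e-6 -> 0); learning rate, batch size, and epochs are the logged values.
    trainer = ModelTrainer(tagger, corpus)
    trainer.fine_tune("resources/taggers/ner_xlm_finedtuned_ck1",
                      learning_rate=5e-6, mini_batch_size=4, max_epochs=10)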
weights.txt ADDED
File without changes (empty file added)