Upload folder using huggingface_hub
- README.md +46 -0
- checkpoint-10000/config.json +26 -0
- checkpoint-10000/model.safetensors +3 -0
- checkpoint-10000/optimizer.pt +3 -0
- checkpoint-10000/rng_state.pth +3 -0
- checkpoint-10000/scheduler.pt +3 -0
- checkpoint-10000/trainer_state.json +173 -0
- checkpoint-10000/training_args.bin +3 -0
- checkpoint-15228/config.json +26 -0
- checkpoint-15228/model.safetensors +3 -0
- checkpoint-15228/optimizer.pt +3 -0
- checkpoint-15228/rng_state.pth +3 -0
- checkpoint-15228/scheduler.pt +3 -0
- checkpoint-15228/trainer_state.json +243 -0
- checkpoint-15228/training_args.bin +3 -0
- config.json +26 -0
- merges.txt +0 -0
- model.safetensors +3 -0
- training_args.bin +3 -0
- vocab.json +0 -0
README.md
ADDED
@@ -0,0 +1,46 @@
# EsperBERTo Model Card

## Model Description
EsperBERTo is a RoBERTa-like model trained from scratch on Esperanto, using a large corpus drawn from OSCAR and the Leipzig Corpora Collection. It is designed for masked language modeling and other text-based prediction tasks, and is intended for understanding and working with Esperanto text.

### Datasets
- **OSCAR Corpus (Esperanto)**: Extracted from Common Crawl dumps, filtered by language classification.
- **Leipzig Corpora Collection (Esperanto)**: Includes texts from news, literature, and Wikipedia.

### Preprocessing
- A byte-level Byte-Pair Encoding (BPE) tokenizer was trained with a vocabulary size of 52,000 tokens.
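
The tokenizer training itself is not part of this upload. As a rough sketch, a byte-level BPE tokenizer of this kind can be trained with the `tokenizers` library as shown below; the corpus file name and the special-token list are illustrative assumptions, and only the 52,000-token vocabulary size comes from this card.

```python
from tokenizers import ByteLevelBPETokenizer

# Train a byte-level BPE tokenizer on the raw Esperanto text.
# "esperanto_corpus.txt" is a placeholder for the concatenated OSCAR/Leipzig text.
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["esperanto_corpus.txt"],
    vocab_size=52_000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)

# Writes vocab.json and merges.txt, the two tokenizer files included in this repository.
tokenizer.save_model(".")
```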
### Hyperparameters
- **Number of Epochs**: 1
- **Batch Size per GPU**: 64
- **Save Steps**: 10,000 (a checkpoint is saved every 10,000 training steps)
- **Save Total Limit**: 2 (only the two most recent checkpoints are kept)
- **Loss Calculation**: Prediction loss only
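
The exact `run_language_modeling.py` invocation is not included in this upload. As a non-authoritative sketch, the hyperparameters above map onto the `Trainer` API roughly as follows; the tiny in-memory corpus, output directory, and tokenizer settings are placeholders (the real run used the full OSCAR/Leipzig data).

```python
from datasets import Dataset
from transformers import (
    DataCollatorForLanguageModeling,
    RobertaConfig,
    RobertaForMaskedLM,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

# Tokenizer built from the vocab.json / merges.txt in this repository.
tokenizer = RobertaTokenizerFast.from_pretrained(
    "SamJoshua/EsperBERTo-small", model_max_length=512
)

# Model shape mirroring the uploaded config.json (6 layers, 12 heads, 52k vocab).
config = RobertaConfig(
    vocab_size=52_000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)
model = RobertaForMaskedLM(config=config)

# Tiny stand-in corpus so the sketch runs end to end.
corpus = Dataset.from_dict(
    {"text": ["Jen la komenco de bela tago.", "Esperanto estas facila lingvo."]}
)
train_dataset = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Dynamic masking for the MLM objective (15% is the library default).
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

# The hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="./EsperBERTo",
    num_train_epochs=1,
    per_device_train_batch_size=64,
    save_steps=10_000,
    save_total_limit=2,
    prediction_loss_only=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
)
trainer.train()
```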
### Software and Libraries
- **Transformers Library**: [Transformers](https://github.com/huggingface/transformers) (the uploaded checkpoints were saved with version 4.43.0.dev0)
- **Training Script**: `run_language_modeling.py`

Example usage with the `fill-mask` pipeline:

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="SamJoshua/EsperBERTo-small",
    tokenizer="SamJoshua/EsperBERTo-small"
)

fill_mask("Jen la komenco de bela <mask>.")
```
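
The pipeline call returns the top-scoring completions for the masked token. For lower-level access, the checkpoint can also be loaded directly; a brief sketch using only the stock Transformers Auto classes:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SamJoshua/EsperBERTo-small")
model = AutoModelForMaskedLM.from_pretrained("SamJoshua/EsperBERTo-small")

# Score the vocabulary at the masked position and print the five best tokens.
inputs = tokenizer("Jen la komenco de bela <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```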
## Evaluation Results
The model has not yet been evaluated on a standardized test set. Future updates will include evaluation metrics such as perplexity and accuracy on a held-out validation set.

## Intended Uses & Limitations
**Intended Uses**: This model is intended for researchers, developers, and language enthusiasts who wish to explore Esperanto language processing, e.g. masked-token prediction, sentiment analysis, and other downstream tasks (typically after fine-tuning).

**Limitations**:
- The model was trained for only one epoch due to computational constraints, which may limit its grasp of more complex language structures.
- Because the model is trained on public web text, it may inadvertently learn and replicate social biases present in the training data.

Feel free to contribute to the model by fine-tuning it on specific tasks or by extending its training with more data or epochs. This model serves as a baseline for further research and development in Esperanto language modeling.
checkpoint-10000/config.json
ADDED
@@ -0,0 +1,26 @@
{
  "architectures": [
    "RobertaForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 6,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.43.0.dev0",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 52000
}
checkpoint-10000/model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:64346a7029f05e9129b43c14bec6a90eba3b21374f55a75c256e50669c6c6c41
size 334030264
checkpoint-10000/optimizer.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:534703ea7f111a5d936109d1687a96ca4ff44206bc581d7dd713f64b1bdab095
size 668124218
checkpoint-10000/rng_state.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:27d2e95c87c6ddccfb180e047f9063359bb3c2a42535b516f982953ce9b79904
size 14244
checkpoint-10000/scheduler.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e09a46302ae32307de3843894190190a4e30e2dca5bf8a1efb68f0cf10943996
size 1064
checkpoint-10000/trainer_state.json
ADDED
@@ -0,0 +1,173 @@
{
  "best_metric": null,
  "best_model_checkpoint": null,
  "epoch": 0.6566850538481744,
  "eval_steps": 500,
  "global_step": 10000,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "log_history": [
    { "epoch": 0.03283425269240872, "grad_norm": 2.148223638534546, "learning_rate": 4.8358287365379566e-05, "loss": 7.8532, "step": 500 },
    { "epoch": 0.06566850538481744, "grad_norm": 1.8999865055084229, "learning_rate": 4.671657473075913e-05, "loss": 7.2619, "step": 1000 },
    { "epoch": 0.09850275807722617, "grad_norm": 1.8169187307357788, "learning_rate": 4.5074862096138694e-05, "loss": 7.078, "step": 1500 },
    { "epoch": 0.1313370107696349, "grad_norm": 2.516470193862915, "learning_rate": 4.343314946151826e-05, "loss": 6.9502, "step": 2000 },
    { "epoch": 0.1641712634620436, "grad_norm": 2.3785781860351562, "learning_rate": 4.179143682689782e-05, "loss": 6.8573, "step": 2500 },
    { "epoch": 0.19700551615445233, "grad_norm": 2.1694188117980957, "learning_rate": 4.0149724192277385e-05, "loss": 6.8, "step": 3000 },
    { "epoch": 0.22983976884686105, "grad_norm": 2.474036455154419, "learning_rate": 3.850801155765695e-05, "loss": 6.744, "step": 3500 },
    { "epoch": 0.2626740215392698, "grad_norm": 2.19565749168396, "learning_rate": 3.686629892303651e-05, "loss": 6.6814, "step": 4000 },
    { "epoch": 0.29550827423167847, "grad_norm": 2.747317314147949, "learning_rate": 3.522458628841608e-05, "loss": 6.5869, "step": 4500 },
    { "epoch": 0.3283425269240872, "grad_norm": 3.399601936340332, "learning_rate": 3.358287365379564e-05, "loss": 6.4711, "step": 5000 },
    { "epoch": 0.3611767796164959, "grad_norm": 2.9853923320770264, "learning_rate": 3.1941161019175205e-05, "loss": 6.3167, "step": 5500 },
    { "epoch": 0.39401103230890466, "grad_norm": 3.57155704498291, "learning_rate": 3.029944838455477e-05, "loss": 6.2061, "step": 6000 },
    { "epoch": 0.42684528500131336, "grad_norm": 3.53355073928833, "learning_rate": 2.8657735749934332e-05, "loss": 6.0795, "step": 6500 },
    { "epoch": 0.4596795376937221, "grad_norm": 3.1891579627990723, "learning_rate": 2.70160231153139e-05, "loss": 5.9704, "step": 7000 },
    { "epoch": 0.4925137903861308, "grad_norm": 3.8728067874908447, "learning_rate": 2.5374310480693457e-05, "loss": 5.8494, "step": 7500 },
    { "epoch": 0.5253480430785396, "grad_norm": 3.752729654312134, "learning_rate": 2.3732597846073024e-05, "loss": 5.7467, "step": 8000 },
    { "epoch": 0.5581822957709482, "grad_norm": 3.093104124069214, "learning_rate": 2.2090885211452588e-05, "loss": 5.6585, "step": 8500 },
    { "epoch": 0.5910165484633569, "grad_norm": 3.5485284328460693, "learning_rate": 2.0449172576832152e-05, "loss": 5.5876, "step": 9000 },
    { "epoch": 0.6238508011557657, "grad_norm": 3.3730971813201904, "learning_rate": 1.8807459942211716e-05, "loss": 5.5165, "step": 9500 },
    { "epoch": 0.6566850538481744, "grad_norm": 3.767274856567383, "learning_rate": 1.716574730759128e-05, "loss": 5.4677, "step": 10000 }
  ],
  "logging_steps": 500,
  "max_steps": 15228,
  "num_input_tokens_seen": 0,
  "num_train_epochs": 1,
  "save_steps": 10000,
  "stateful_callbacks": {
    "TrainerControl": {
      "args": {
        "should_epoch_stop": false,
        "should_evaluate": false,
        "should_log": false,
        "should_save": true,
        "should_training_stop": false
      },
      "attributes": {}
    }
  },
  "total_flos": 2.122034184192e+16,
  "train_batch_size": 64,
  "trial_name": null,
  "trial_params": null
}
checkpoint-10000/training_args.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ff57da9fde7b884e48db14f0049fe53a5e93321ff3fc5de4894a45f0aa892541
size 5112
checkpoint-15228/config.json
ADDED
@@ -0,0 +1,26 @@
{
  "architectures": [
    "RobertaForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 6,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.43.0.dev0",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 52000
}
checkpoint-15228/model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ec4c8484678948c3855c5f338a7b607fcb881e00eca1e7848d1fedc28b08300
size 334030264
checkpoint-15228/optimizer.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8fa5ab0ea7b92213009172ca6fb870666c9b4d2db45868bcceed46605971e23a
size 668124218
checkpoint-15228/rng_state.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9fa9f26f4c75cba39c17dc2ffaaaf19772fec591f3112996eb055f5518a6f16a
size 14244
checkpoint-15228/scheduler.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e8bad113ce9254ef8876588cf13f1a80562951be4985e148f8e1ce0f8a5305a3
size 1064
checkpoint-15228/trainer_state.json
ADDED
@@ -0,0 +1,243 @@
{
  "best_metric": null,
  "best_model_checkpoint": null,
  "epoch": 1.0,
  "eval_steps": 500,
  "global_step": 15228,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "log_history": [
    { "epoch": 0.03283425269240872, "grad_norm": 2.148223638534546, "learning_rate": 4.8358287365379566e-05, "loss": 7.8532, "step": 500 },
    { "epoch": 0.06566850538481744, "grad_norm": 1.8999865055084229, "learning_rate": 4.671657473075913e-05, "loss": 7.2619, "step": 1000 },
    { "epoch": 0.09850275807722617, "grad_norm": 1.8169187307357788, "learning_rate": 4.5074862096138694e-05, "loss": 7.078, "step": 1500 },
    { "epoch": 0.1313370107696349, "grad_norm": 2.516470193862915, "learning_rate": 4.343314946151826e-05, "loss": 6.9502, "step": 2000 },
    { "epoch": 0.1641712634620436, "grad_norm": 2.3785781860351562, "learning_rate": 4.179143682689782e-05, "loss": 6.8573, "step": 2500 },
    { "epoch": 0.19700551615445233, "grad_norm": 2.1694188117980957, "learning_rate": 4.0149724192277385e-05, "loss": 6.8, "step": 3000 },
    { "epoch": 0.22983976884686105, "grad_norm": 2.474036455154419, "learning_rate": 3.850801155765695e-05, "loss": 6.744, "step": 3500 },
    { "epoch": 0.2626740215392698, "grad_norm": 2.19565749168396, "learning_rate": 3.686629892303651e-05, "loss": 6.6814, "step": 4000 },
    { "epoch": 0.29550827423167847, "grad_norm": 2.747317314147949, "learning_rate": 3.522458628841608e-05, "loss": 6.5869, "step": 4500 },
    { "epoch": 0.3283425269240872, "grad_norm": 3.399601936340332, "learning_rate": 3.358287365379564e-05, "loss": 6.4711, "step": 5000 },
    { "epoch": 0.3611767796164959, "grad_norm": 2.9853923320770264, "learning_rate": 3.1941161019175205e-05, "loss": 6.3167, "step": 5500 },
    { "epoch": 0.39401103230890466, "grad_norm": 3.57155704498291, "learning_rate": 3.029944838455477e-05, "loss": 6.2061, "step": 6000 },
    { "epoch": 0.42684528500131336, "grad_norm": 3.53355073928833, "learning_rate": 2.8657735749934332e-05, "loss": 6.0795, "step": 6500 },
    { "epoch": 0.4596795376937221, "grad_norm": 3.1891579627990723, "learning_rate": 2.70160231153139e-05, "loss": 5.9704, "step": 7000 },
    { "epoch": 0.4925137903861308, "grad_norm": 3.8728067874908447, "learning_rate": 2.5374310480693457e-05, "loss": 5.8494, "step": 7500 },
    { "epoch": 0.5253480430785396, "grad_norm": 3.752729654312134, "learning_rate": 2.3732597846073024e-05, "loss": 5.7467, "step": 8000 },
    { "epoch": 0.5581822957709482, "grad_norm": 3.093104124069214, "learning_rate": 2.2090885211452588e-05, "loss": 5.6585, "step": 8500 },
    { "epoch": 0.5910165484633569, "grad_norm": 3.5485284328460693, "learning_rate": 2.0449172576832152e-05, "loss": 5.5876, "step": 9000 },
    { "epoch": 0.6238508011557657, "grad_norm": 3.3730971813201904, "learning_rate": 1.8807459942211716e-05, "loss": 5.5165, "step": 9500 },
    { "epoch": 0.6566850538481744, "grad_norm": 3.767274856567383, "learning_rate": 1.716574730759128e-05, "loss": 5.4677, "step": 10000 },
    { "epoch": 0.6895193065405831, "grad_norm": 3.7977945804595947, "learning_rate": 1.5524034672970843e-05, "loss": 5.4096, "step": 10500 },
    { "epoch": 0.7223535592329918, "grad_norm": 3.5391793251037598, "learning_rate": 1.3882322038350407e-05, "loss": 5.3555, "step": 11000 },
    { "epoch": 0.7551878119254006, "grad_norm": 3.8988144397735596, "learning_rate": 1.2240609403729971e-05, "loss": 5.3476, "step": 11500 },
    { "epoch": 0.7880220646178093, "grad_norm": 4.032622337341309, "learning_rate": 1.0598896769109535e-05, "loss": 5.2971, "step": 12000 },
    { "epoch": 0.820856317310218, "grad_norm": 3.8639235496520996, "learning_rate": 8.957184134489099e-06, "loss": 5.2853, "step": 12500 },
    { "epoch": 0.8536905700026267, "grad_norm": 4.275233268737793, "learning_rate": 7.315471499868663e-06, "loss": 5.2745, "step": 13000 },
    { "epoch": 0.8865248226950354, "grad_norm": 3.4154789447784424, "learning_rate": 5.673758865248227e-06, "loss": 5.2396, "step": 13500 },
    { "epoch": 0.9193590753874442, "grad_norm": 3.5738344192504883, "learning_rate": 4.032046230627791e-06, "loss": 5.2383, "step": 14000 },
    { "epoch": 0.9521933280798529, "grad_norm": 3.526001214981079, "learning_rate": 2.390333596007355e-06, "loss": 5.2186, "step": 14500 },
    { "epoch": 0.9850275807722616, "grad_norm": 3.600189208984375, "learning_rate": 7.486209613869189e-07, "loss": 5.2131, "step": 15000 }
  ],
  "logging_steps": 500,
  "max_steps": 15228,
  "num_input_tokens_seen": 0,
  "num_train_epochs": 1,
  "save_steps": 10000,
  "stateful_callbacks": {
    "TrainerControl": {
      "args": {
        "should_epoch_stop": false,
        "should_evaluate": false,
        "should_log": false,
        "should_save": true,
        "should_training_stop": true
      },
      "attributes": {}
    }
  },
  "total_flos": 3.231269529606144e+16,
  "train_batch_size": 64,
  "trial_name": null,
  "trial_params": null
}
checkpoint-15228/training_args.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ff57da9fde7b884e48db14f0049fe53a5e93321ff3fc5de4894a45f0aa892541
size 5112
config.json
ADDED
@@ -0,0 +1,26 @@
{
  "architectures": [
    "RobertaForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 6,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.43.0.dev0",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 52000
}
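
This configuration describes a 6-layer, 12-head RoBERTa encoder with a 52,000-token vocabulary, roughly 84M parameters, consistent with the ~334 MB float32 `model.safetensors` file above. A minimal sketch (standard Transformers API) of instantiating the architecture from it:

```python
from transformers import RobertaConfig, RobertaForMaskedLM

# Build an (untrained) model with the same shape as the uploaded checkpoint.
config = RobertaConfig.from_pretrained("SamJoshua/EsperBERTo-small")
model = RobertaForMaskedLM(config)

print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```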
merges.txt
ADDED
The diff for this file is too large to render. See raw diff.
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ec4c8484678948c3855c5f338a7b607fcb881e00eca1e7848d1fedc28b08300
size 334030264
training_args.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ff57da9fde7b884e48db14f0049fe53a5e93321ff3fc5de4894a45f0aa892541
size 5112
vocab.json
ADDED
The diff for this file is too large to render. See raw diff.