Update README.md
README.md
CHANGED
@@ -42,7 +42,7 @@ print(tokenizer.decode(outputs[0]))
 ### Training Data
 The base model is pre-trained on [vngrs-web-corpus](https://huggingface.co/datasets/vngrs-ai/vngrs-web-corpus). It is curated by cleaning and filtering Turkish parts of [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201) and [mC4](https://huggingface.co/datasets/mc4) datasets. These datasets consist of documents of unstructured web crawl data. More information about the dataset can be found on their respective pages. Data is filtered using a set of heuristics and certain rules, explained in the appendix of our [paper](https://arxiv.org/abs/2403.01308).
 
-The fine-tuning dataset is
+The fine-tuning dataset is the Turkish sections of [MLSum](https://huggingface.co/datasets/mlsum), [TRNews](https://huggingface.co/datasets/batubayk/TR-News), [XLSum](https://huggingface.co/datasets/csebuetnlp/xlsum) and [Wikilingua](https://huggingface.co/datasets/wiki_lingua) datasets.
 
 ### Limitations
 This model is fine-tuned for paraphrasing tasks. It is not intended to be used in any other case and can not be fine-tuned to any other task with full performance of the base model. It is also not guaranteed that this model will work without specified prompts.
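For reference, here is a minimal, unofficial sketch of pulling the Turkish portions of two of the fine-tuning corpora named in the hunk above with the `datasets` library. The config names (`"tu"` for MLSum, `"turkish"` for XLSum) and the column names are assumptions taken from the public dataset cards, not from this commit, and older script-based datasets may additionally require `trust_remote_code=True` or a Parquet mirror depending on your `datasets` version.

```python
# Unofficial sketch: load the Turkish splits of two of the fine-tuning corpora.
# Config and column names are assumptions; check each dataset card before relying on them.
from datasets import load_dataset

mlsum_tr = load_dataset("mlsum", "tu", split="train")                  # "tu" assumed to be the Turkish config
xlsum_tr = load_dataset("csebuetnlp/xlsum", "turkish", split="train")  # "turkish" assumed config name

print(mlsum_tr[0]["text"][:200])   # peek at one article body
print(xlsum_tr[0]["summary"])      # and one reference summary
```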
@@ -68,11 +68,11 @@ Pre-trained for 30 days and for a total of 708B tokens. Finetuned for 20 epoch.
 - **Optimizer** : Adam optimizer (β1 = 0.9, β2 = 0.98, Ɛ = 1e-6)
 - **Scheduler**: Linear decay scheduler
 - **Dropout**: 0.1
-- **Learning rate**:
+- **Learning rate**: 1e-5
 - **Fine-tune epochs**: 20
 
 #### Metrics
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f8b3c84588fe31f435a92b/
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f8b3c84588fe31f435a92b/nrM_FA3bGk9NAYW_044HW.png)
 
 ## Citation
 ```
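The hyperparameters listed in the hunk above translate almost directly into optimizer and scheduler setup. Below is a minimal, unofficial PyTorch sketch, assuming a warmup-free linear decay and a placeholder step count; the toy `model` stands in for the actual VBART checkpoint, and the dropout of 0.1 belongs in the model config rather than in the optimizer.

```python
# Unofficial sketch of the stated fine-tuning setup:
# Adam (beta1=0.9, beta2=0.98, eps=1e-6), lr=1e-5, linear decay, 20 epochs.
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(16, 16)   # stand-in; in practice, the loaded VBART checkpoint

steps_per_epoch = 1000            # placeholder; depends on dataset size and batch size
num_epochs = 20                   # fine-tune epochs stated in the card
total_steps = steps_per_epoch * num_epochs

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-5,                      # learning rate stated in the card
    betas=(0.9, 0.98),
    eps=1e-6,
)
# Linear decay to zero; the card does not mention warmup, so none is assumed here.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=total_steps
)
```

In a real training loop, `scheduler.step()` would be called after each `optimizer.step()` so that the learning rate decays linearly over the full run.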