erdiari committed
Commit 6119cdb
1 Parent(s): 9942c14

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -17,7 +17,7 @@ VBART is the first sequence-to-sequence LLM pre-trained on Turkish corpora from
 The model is capable of conditional text generation tasks such as text summarization, paraphrasing, and title generation when fine-tuned.
 It outperforms its multilingual counterparts, albeit being much smaller than other implementations.
 
-This repository contains fine-tuned TensorFlow and Safetensors weights of VBART for text paraphrasing task.
+This repository contains fine-tuned TensorFlow and Safetensors weights of VBART for the sentence-level text paraphrasing task.
 
 - **Developed by:** [VNGRS-AI](https://vngrs.com/ai/)
 - **Model type:** Transformer encoder-decoder based on mBART architecture
@@ -48,7 +48,7 @@ The base model is pre-trained on [vngrs-web-corpus](https://huggingface.co/datas
 The fine-tuning dataset is a mixture of [OpenSubtitles](https://huggingface.co/datasets/open_subtitles), [TED Talks (2013)](https://wit3.fbk.eu/home) and [Tatoeba](https://tatoeba.org/en/) datasets.
 
 ### Limitations
-This model is fine-tuned for paraphrasing tasks. It is not intended to be used in any other case and can not be fine-tuned to any other task with full performance of the base model. It is also not guaranteed that this model will work without specified prompts.
+This model is fine-tuned for paraphrasing tasks at the sentence level only. It is not intended to be used for any other task and cannot be fine-tuned to other tasks with the full performance of the base model. It is also not guaranteed that this model will work without the specified prompts.
 
 ### Training Procedure
 Pre-trained for 30 days and for a total of 708B tokens. Fine-tuned for 20 epochs.
 
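Since the updated card warns that the model may not work without the specified prompts but the diff itself shows no usage snippet, here is a minimal sketch of loading the Safetensors weights through `transformers`. The repo ID `vngrs-ai/VBART-Large-Paraphrasing`, the `AutoModelForSeq2SeqLM` loading path, and the bare-sentence input format are assumptions, not part of this commit; check the model card for the exact prompt format.

```python
# Minimal sketch, not from this commit. Assumptions: the repo ID below,
# that the weights load via AutoModelForSeq2SeqLM (mBART-style seq2seq),
# and that a bare Turkish sentence is valid input. The card warns the
# model may need its specified prompt format; consult it before relying
# on this.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "vngrs-ai/VBART-Large-Paraphrasing"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)  # Safetensors weights

# The model is fine-tuned at the sentence level, so pass one sentence at a time.
sentence = "Bu model, verilen cümleleri başka sözcüklerle yeniden ifade eder."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```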