Update README.md
README.md
CHANGED
@@ -82,6 +82,7 @@ output = model.generate(inputs["input_ids"])
 refined_text = tokenizer.decode(output[0], skip_special_tokens=True)
 
 print(refined_text)
+```
 Training Details
 Training Data
 The model was fine-tuned on a dataset consisting of 4000 rows of machine-translated text and refined English text. The dataset was designed to focus on translation corrections, ensuring that the model learns to improve translation fluency.
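For context, the change above only adds the closing code fence for the README's inference snippet. The sketch below reconstructs what that snippet appears to show end to end; it is an assumption-laden illustration, not the repository's verbatim code. In particular, the model ID is a placeholder and the use of a seq2seq checkpoint is an assumption based on the translation-refinement use case described in the README.

```python
# Minimal end-to-end sketch of the usage excerpted in the diff above.
# Assumptions (not taken from the diff): the checkpoint is a seq2seq model,
# and "your-username/your-model-id" is a placeholder for the real model ID.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "your-username/your-model-id"  # placeholder, replace with the actual repository name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# A machine-translated sentence to be refined into fluent English.
raw_translation = "The weather today is very much good for walking outside."

inputs = tokenizer(raw_translation, return_tensors="pt")
output = model.generate(inputs["input_ids"], max_new_tokens=128)

refined_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(refined_text)
```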