---
language:
- en
- hi
- multilingual
tags:
- text2text-generation
widget:
- text: What are you doing?
  example_title: What are you doing
- text: It is raining heavily.
  example_title: It is raining heavily.
- text: How are you?
  example_title: How are you?
datasets:
- rvv-karma/English-Hinglish-TOP
license: apache-2.0
pipeline_tag: text2text-generation
---
# English2Hinglish-Flan-T5-Base
This model is a fine-tuned version of [Flan-T5 Base](https://huggingface.co/google/flan-t5-base) on the [English-Hinglish-TOP](https://huggingface.co/datasets/rvv-karma/English-Hinglish-TOP) dataset. It translates English sentences into Hinglish (Hindi-English code-mixed text written in the Latin script).
## Usage
```python
# Load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "rvv-karma/English2Hinglish-Flan-T5-Base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Tokenize the English input, generate the Hinglish output, and decode it
input_text = "What are you doing?"
inputs = tokenizer(input_text, return_tensors="pt")
output_ids = model.generate(**inputs)
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output_text)
```
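For quick experiments, the model can also be used through the high-level `pipeline` API (the card's `pipeline_tag` is `text2text-generation`); a minimal sketch, assuming the default generation settings are acceptable:
```python
from transformers import pipeline

# Wraps model loading, tokenization, generation, and decoding in one object
translator = pipeline(
    "text2text-generation",
    model="rvv-karma/English2Hinglish-Flan-T5-Base",
)

result = translator("It is raining heavily.")
print(result[0]["generated_text"])
```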
## Fine-tuning script
[Google Colaboratory Notebook](https://colab.research.google.com/drive/11fUHem8r8qe_Ildh2_1XjOEY-Zyoy5j8?usp=sharing)
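If the notebook is unavailable, the general shape of such a fine-tuning run with `Seq2SeqTrainer` looks like the sketch below. This is an illustration, not the exact training recipe: the column names `en` and `hi_ng` and all hyperparameters are assumptions to be checked against the dataset card and the notebook.
```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base_model = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSeq2SeqLM.from_pretrained(base_model)

dataset = load_dataset("rvv-karma/English-Hinglish-TOP")

def preprocess(batch):
    # NOTE: "en" and "hi_ng" are assumed column names; adjust to the dataset schema
    model_inputs = tokenizer(batch["en"], max_length=128, truncation=True)
    labels = tokenizer(text_target=batch["hi_ng"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

training_args = Seq2SeqTrainingArguments(
    output_dir="english2hinglish-flan-t5-base",
    learning_rate=3e-4,              # assumed; a common choice for T5 fine-tuning
    per_device_train_batch_size=16,  # assumed
    num_train_epochs=3,              # assumed
    save_strategy="epoch",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```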
## References
[DataCamp](https://www.datacamp.com/tutorial/flan-t5-tutorial)