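Usage example: load the tokenizer and model, then sample a completion for a prompt.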
```python
import torch
import transformers

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")

# the checkpoint uses a GPT2-style tokenizer (as in FRED-T5)
t5_tokenizer = transformers.GPT2Tokenizer.from_pretrained("AlexWortega/FlanFred")
t5_model = transformers.T5ForConditionalGeneration.from_pretrained("AlexWortega/FlanFred")
t5_model.to(device)  # move the model to the same device as the inputs
t5_model.eval()

def generate_text(input_str, tokenizer, model, device, max_length=50):
    # encode the input string to the model's input_ids
    input_ids = tokenizer.encode(input_str, return_tensors='pt').to(device)

    # generate text by sampling
    with torch.no_grad():
        outputs = model.generate(
            input_ids=input_ids,
            max_length=max_length,
            num_return_sequences=1,
            temperature=0.7,
            do_sample=True,
        )

    # decode the output and return the text
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# usage:
input_str = "Hello, how are you?"
print(generate_text(input_str, t5_tokenizer, t5_model, device))
```
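
Since the model is instruction-tuned on Russian-translated Flan data, Russian prompts are the primary use case. A minimal follow-up sketch, continuing from the snippet above (the instruction below is an illustrative prompt, not a prescribed format):

```python
# Russian instruction ("Translate into English: Hi, how are you?");
# the phrasing is illustrative, no special prompt format is assumed
ru_prompt = "Переведи на английский: Привет, как дела?"
print(generate_text(ru_prompt, t5_tokenizer, t5_model, device, max_length=64))
```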

Metrics:

| Metric       | flanfred | siberianfred | fred  |
| ------------ | -------- | ------------ | ----- |
| xnli_en      | 0.51     | 0.49         | 0.041 |
| xnli_ru      | 0.71     | 0.62         | 0.55  |
| xwinograd_ru | 0.66     | 0.51         | 0.54  |
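
xnli_en and xnli_ru are accuracy on the English and Russian splits of XNLI (natural language inference); xwinograd_ru is accuracy on the Russian split of XWinograd (Winograd-schema coreference). The evaluation harness behind these numbers is not documented here; a common approach for such multiple-choice benchmarks is to score each candidate label by the model's log-likelihood. A minimal sketch, reusing the model loaded above (the prompt template and label strings are assumptions, not the setup used for the table):

```python
import torch

def option_loglik(prompt, option, tokenizer, model, device):
    # Total log-likelihood of `option` as the decoder output given `prompt`.
    input_ids = tokenizer.encode(prompt, return_tensors="pt").to(device)
    labels = tokenizer.encode(option, return_tensors="pt").to(device)
    with torch.no_grad():
        # .loss is the mean cross-entropy per label token;
        # negate and scale by length to get a total log-likelihood
        loss = model(input_ids=input_ids, labels=labels).loss
    return -loss.item() * labels.shape[1]

# Hypothetical zero-shot NLI prompt (premise / hypothesis / relation)
premise = "Кошка спит на диване."    # "The cat is sleeping on the couch."
hypothesis = "Животное отдыхает."    # "An animal is resting."
prompt = f"Посылка: {premise} Гипотеза: {hypothesis} Отношение:"
options = ["следование", "противоречие", "нейтрально"]  # entailment / contradiction / neutral
pred = max(options, key=lambda o: option_loglik(prompt, o, t5_tokenizer, t5_model, device))
print(pred)
```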

Citation:

```bibtex
@MISC{AlexWortega/flan_translated_300k,
    author  = {Pavel Ilin and Ksenia Zolian and Ilya Kuleshov and Egor Kokush and Aleksandr Nikolich},
    title   = {Russian Flan translated},
    url     = {https://huggingface.co/datasets/AlexWortega/flan_translated_300k},
    year    = 2023
}
```
