---
license: lgpl-3.0
---

# t5_interpreter

A ruT5-based model for incomplete utterance restoration, spell checking, and text normalization of dialogue utterances. Given a dialogue context, it rewrites the last, possibly elliptical reply into a complete, self-contained sentence.

Read more about the task here.

## Usage example

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = 'inkoziev/t5_interpreter'
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).to(device)
model.eval()

# Dialogue context: turns separated by newlines; the context ends with '#'
# after the reply that should be restored.
t5_input = '- Тебя как зовут?\n- Мальвина #'  # "- What's your name?\n- Malvina #"
input_ids = tokenizer(t5_input, return_tensors='pt').input_ids.to(device)

with torch.no_grad():
    out_ids = model.generate(input_ids=input_ids, max_length=40,
                             eos_token_id=tokenizer.eos_token_id, early_stopping=True)

t5_output = tokenizer.decode(out_ids[0], skip_special_tokens=True)
print(t5_output)
```
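
Several dialogues can also be processed in one call using the standard `transformers` batch API. Below is a minimal sketch, reusing the model and tokenizer loaded above; the second dialogue is a hypothetical input added for illustration and is not part of the model card.

```python
# Hypothetical batch of dialogue contexts, each ending with '#'.
batch = [
    '- Тебя как зовут?\n- Мальвина #',
    '- Сколько тебе лет?\n- Десять #',  # "- How old are you?\n- Ten #"
]
enc = tokenizer(batch, return_tensors='pt', padding=True).to(device)

with torch.no_grad():
    out_ids = model.generate(**enc, max_length=40,
                             eos_token_id=tokenizer.eos_token_id, early_stopping=True)

for text in tokenizer.batch_decode(out_ids, skip_special_tokens=True):
    print(text)
```

Padding together with the attention mask returned by the tokenizer keeps the shorter inputs from affecting generation for the longer ones.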