This is a fine-tuned version of the google/mt5-base model for Russian text summarization, trained on a dataset of ~50k samples. Updates are coming soon, aimed at improving summary quality, length control, and accuracy.
Example Usage:
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "sarahai/ru-sum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

device = torch.device("cpu")  # switch to torch.device("cuda") if a GPU is available
model = model.to(device)

input_text = "текст на русском"  # your input text in Russian
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
outputs = model.generate(input_ids, max_length=100, min_length=50, length_penalty=2.0, num_beams=4, early_stopping=True)  # adjust generation parameters to your preferences
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(summary)
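Alternatively, the same model can typically be loaded through the transformers summarization pipeline. This is a minimal sketch, not part of the original card; it assumes the pipeline API suits your use case and mirrors the generation parameters from the example above:

from transformers import pipeline

# Load the model through the high-level pipeline API (device=-1 selects CPU).
summarizer = pipeline("summarization", model="sarahai/ru-sum", device=-1)

text = "текст на русском"  # your input text in Russian
result = summarizer(text, max_length=100, min_length=50, num_beams=4, length_penalty=2.0)
print(result[0]["summary_text"])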
References: Hugging Face Model Hub, T5 Paper.
Disclaimer: The model's performance may be influenced by the quality and representativeness of the data it was fine-tuned on. Users are encouraged to assess the model's suitability for their specific applications and datasets.
Model tree for sarahai/ru-sum
Base model: google/mt5-base