How to use the model
Import the model and tokenizer from the Transformers library:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("tykea/mBart-large-50-KQA")
model = AutoModelForSeq2SeqLM.from_pretrained("tykea/mBart-large-50-KQA")
```
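For a quick smoke test you can also drive the same checkpoint through the high-level `pipeline` API. This sketch is not part of the original card; the placeholder question text and the `qa_pipe` name are illustrative.

```python
# Optional quick-test sketch (not from the original card).
import torch
from transformers import pipeline

qa_pipe = pipeline(
    "text2text-generation",
    model="tykea/mBart-large-50-KQA",
    device=0 if torch.cuda.is_available() else -1,  # use the GPU when present
)

# The "qestion: " prefix matches the prompt format used by the ask() helper below.
result = qa_pipe("qestion: <your Khmer question here>", max_length=50)
print(result[0]["generated_text"])
```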
Define a function that takes a question and passes it to the model:
```python
import torch

# Helper so a question can be asked with a single call
def ask(custom_question):
    # Tokenize the input ("qestion" is the prompt prefix used by this card)
    inputs = tokenizer(
        f"qestion: {custom_question}",
        return_tensors="pt",
        truncation=True,
        max_length=512,
        padding="max_length"
    )

    # Move both the model and the inputs to the same device
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)
    inputs = {key: value.to(device) for key, value in inputs.items()}

    model.eval()
    with torch.no_grad():
        outputs = model.generate(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
            max_length=50,
            num_beams=4,
            repetition_penalty=2.0,
            early_stopping=True,
            do_sample=True,
            top_k=50,
            top_p=0.95,
            temperature=0.7,
        )

    answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(f"Question: {custom_question}")
    print(f"Answer: {answer}")
```
Then call the function:
question = "ααΎααα’αΌαααΎααα
ααααααααΆ?"
ask(question)
Output:
```
Question: ααΎααα’αΌαααΎαααααααααααΆ?
Answer: ααα’αΌαααΎαααααααααα·α
```
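Note that `ask()` combines beam search with sampling (`do_sample=True`, `top_k`, `top_p`, `temperature`), so answers can vary between runs. The sketch below is an illustrative variant, not from the original card, that drops sampling in favour of plain beam search when reproducible answers are needed; the name `ask_deterministic` is made up for this example.

```python
# Hypothetical variant (not from the original card): drop sampling so that
# repeated calls with the same question return the same answer.
def ask_deterministic(custom_question):
    inputs = tokenizer(
        f"qestion: {custom_question}",   # same prompt prefix as ask()
        return_tensors="pt",
        truncation=True,
        max_length=512,
    )
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)
    inputs = {key: value.to(device) for key, value in inputs.items()}

    model.eval()
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_length=50,
            num_beams=4,             # pure beam search, no sampling
            repetition_penalty=2.0,
            early_stopping=True,
        )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```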
Model tree for tykea/mBart-large-50-KQA
- Base model: facebook/mbart-large-50
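Because the base model is facebook/mbart-large-50, the tokenizer is the multilingual mBART-50 tokenizer, which exposes a `src_lang` attribute ("km_KH" is the code for Khmer). Whether this fine-tuned checkpoint benefits from setting it is not stated in the card, so the snippet below is only a hedged sketch.

```python
# Hedged sketch: mBART-50 tokenizers accept language codes such as "km_KH" (Khmer).
# The original card does not set src_lang, so treat this as an optional experiment.
tokenizer.src_lang = "km_KH"
inputs = tokenizer("qestion: <your Khmer question>", return_tensors="pt")
```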