---
license: mit
language:
- km
metrics:
- accuracy
base_model:
- facebook/mbart-large-50
library_name: transformers
datasets:
- kimleang123/khmer_question_answer
---
## How to use the model
Import the model and tokenizer from the `transformers` library:
```py
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("tykea/mBart-large-50-KQA")
model = AutoModelForSeq2SeqLM.from_pretrained("tykea/mBart-large-50-KQA")
```
Define a function that takes a question and passes it to the model:
```py
import torch

# Helper function for asking the model a question
def ask(custom_question):
    # Tokenize the input
    inputs = tokenizer(
        f"qestion: {custom_question}",
        return_tensors="pt",
        truncation=True,
        max_length=512,
        padding="max_length",
    )

    # Move the model and the inputs to the same device
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)
    inputs = {key: value.to(device) for key, value in inputs.items()}

    model.eval()
    with torch.no_grad():
        outputs = model.generate(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
            max_length=50,
            num_beams=4,
            repetition_penalty=2.0,
            early_stopping=True,
            do_sample=True,
            top_k=50,
            top_p=0.95,
            temperature=0.7,
        )

    answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(f"Question: {custom_question}")
    print(f"Answer: {answer}")
```
Then call the `ask` function:
```py
question = "ααΎααα’αΌαααΎαααααααααΆ?"
ask(question)

# Output:
# Question: ααΎααα’αΌαααΎαααααααααΆ?
# Answer: ααα’αΌαααΎαααααααααα·α
```
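Alternatively, the same checkpoint can be queried through the `transformers` pipeline API. The snippet below is a minimal sketch, assuming the same `qestion:` prompt prefix used above and reusing the `question` variable from the previous example; the generation parameters simply mirror those in `ask()` and can be tuned.
```py
from transformers import pipeline

# Text2text-generation pipeline around the same checkpoint
qa = pipeline(
    "text2text-generation",
    model="tykea/mBart-large-50-KQA",
    device=0,  # set to -1 to run on CPU
)

result = qa(f"qestion: {question}", max_length=50, num_beams=4)
print(result[0]["generated_text"])
```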