# Uploaded model

- Developed by: Vicky
- License: MIT
- Fine-tuned from model: BART (summarization)
## Inference

Install the Transformers library:

```shell
pip install transformers
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Mr-Vicky-01/Facebook-Bart-Qna")
model = AutoModelForSeq2SeqLM.from_pretrained("Mr-Vicky-01/Facebook-Bart-Qna")

def generate_answer(text):
    # Tokenize the question and generate an answer with the seq2seq model
    inputs = tokenizer([text], return_tensors='pt', truncation=True)
    output_ids = model.generate(inputs['input_ids'], max_length=512)
    answer = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    return answer

question = """Please answer this question: What is Artificial Intelligence?"""
answer = generate_answer(question)
print(answer)
```
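The example phrases its input as a question with a "Please answer this question:" prefix. If you ask many questions, a small helper keeps that prompt format consistent. This is a minimal sketch; the prefix is taken from the example above, and it is an assumption (not verified here) that it matches the model's fine-tuning format:

```python
def build_prompt(question: str) -> str:
    # Prefix copied from the usage example above; assumed to match
    # the format the model was fine-tuned on.
    return f"Please answer this question: {question}"

prompt = build_prompt("What is Artificial Intelligence?")
print(prompt)  # → Please answer this question: What is Artificial Intelligence?
```

The resulting `prompt` string can then be passed to `generate_answer` from the snippet above.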