# Model Card for Mudasir692/peguses_chat_sum

This model card describes a Pegasus transformer fine-tuned for dialogue (chat) summarization.
## Model Details
### Model Description
- **Developed by:** Mudasir692
- **Model type:** transformer (sequence-to-sequence)
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** Pegasus
## Bias, Risks, and Limitations
The model may not generate fully coherent summaries, particularly for long dialogues.
### Recommendations
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.
## How to Get Started with the Model

Load the model and tokenizer from a local checkpoint:

```python
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# Path to the saved model and tokenizer
model_path = "peguses_chat_sum"
device = torch.device("cpu")

# Load the model and tokenizer from the saved directory
model = PegasusForConditionalGeneration.from_pretrained(model_path)
tokenizer = PegasusTokenizer.from_pretrained(model_path)

# Move the model to the correct device
model = model.to(device)
```
## Example Usage

Load the model from the Hugging Face Hub and summarize a sample dialogue:
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
model = PegasusForConditionalGeneration.from_pretrained("Mudasir692/peguses_chat_sum")
tokenizer = PegasusTokenizer.from_pretrained("Mudasir692/peguses_chat_sum")

input_text = """
#Person1#: Hey Alice, congratulations on your promotion!
#Person2#: Thank you so much! It means a lot to me. I’m still processing it, honestly.
#Person1#: You totally deserve it. Your hard work finally paid off. Let’s celebrate this weekend.
#Person2#: That sounds amazing. Dinner on me, okay?
#Person1#: Sure! Just let me know where and when. Oh, by the way, did you tell your family?
#Person2#: Yes, they were so excited. Mom’s already planning to bake a cake.
#Person1#: That’s wonderful! I’ll bring a gift too. It’s such a big milestone for you.
#Person2#: You’re the best. Thanks for always being so supportive.
"""

# Tokenize the dialogue and generate a summary
inputs = tokenizer(input_text, return_tensors="pt")
model.eval()
outputs = model.generate(**inputs, max_new_tokens=100)
generated_summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Generated summary:", generated_summary)
```