Tags: summarization · transformers · PyTorch · English · bart · bart-large · text2text-generation · conversational · seq2seq
How to use yashugupta786/bart_large_xsum_samsum_conv_summarizer with Transformers:

```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "summarization" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
# pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("summarization", model="yashugupta786/bart_large_xsum_samsum_conv_summarizer")
pipe("Hannah: Hey, do you have Betty's number?\nAmanda: Lemme check")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("yashugupta786/bart_large_xsum_samsum_conv_summarizer")
model = AutoModelForSeq2SeqLM.from_pretrained("yashugupta786/bart_large_xsum_samsum_conv_summarizer")
```
Usage
```python
from transformers import pipeline

summarizer_pipe = pipeline("summarization", model="yashugupta786/bart_large_xsum_samsum_conv_summarizer")

conversation_data = '''Hannah: Hey, do you have Betty's number?
Amanda: Lemme check
Amanda: Sorry, can't find it.
Amanda: Ask Larry
Amanda: He called her last time we were at the park together
Hannah: I don't know him well
Amanda: Don't be shy, he's very nice
Hannah: If you say so..
Hannah: I'd rather you texted him
Amanda: Just text him
Hannah: Urgh.. Alright
Hannah: Bye
Amanda: Bye bye
'''

summarizer_pipe(conversation_data)
```
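BART-large accepts at most 1,024 input tokens, so very long chat transcripts need to be split before being passed to `summarizer_pipe`. Below is a minimal, hypothetical sketch of a line-based chunker; the helper name and the use of whitespace word counts as a rough proxy for tokens are assumptions, not part of this model's API:

```python
def chunk_dialogue(transcript: str, max_words: int = 700):
    """Split a chat transcript into chunks of whole lines, each under max_words.

    Hypothetical helper: whitespace word counts are only a rough proxy for
    model tokens, so max_words should stay well below BART's 1,024-token limit.
    """
    chunks, current, count = [], [], 0
    for line in transcript.strip().splitlines():
        words = len(line.split())
        if current and count + words > max_words:
            chunks.append("\n".join(current))
            current, count = [], 0
        current.append(line)
        count += words
    if current:
        chunks.append("\n".join(current))
    return chunks

# Each chunk can then be passed to summarizer_pipe individually.
```

Splitting on whole lines keeps each speaker turn intact, which matters for dialogue summarization.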
Results
| Metric | Value |
|---|---|
| eval_rouge1 | 54.3921 |
| eval_rouge2 | 29.8078 |
| eval_rougeL | 45.1543 |
| eval_rougeLsum | 49.942 |
| test_rouge1 | 53.3059 |
| test_rouge2 | 28.355 |
| test_rougeL | 44.0953 |
| test_rougeLsum | 48.9246 |
All ROUGE-1, ROUGE-2, and ROUGE-L scores above are F-measures computed from precision and recall:

- ROUGE recall = number of overlapping words / total number of words in the reference (human-annotated) summary
- ROUGE precision = number of overlapping words / total number of words in the candidate (machine-generated) summary
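To illustrate the formulas above, here is a minimal sketch of unigram (ROUGE-1) precision, recall, and F1 over lowercased whitespace tokens. The function name is hypothetical, and real evaluations should use a proper implementation (e.g. the `rouge_score` package), which adds stemming and tokenization:

```python
from collections import Counter

def rouge1_scores(candidate: str, reference: str):
    """Unigram overlap precision, recall, and F1 (a simplified ROUGE-1)."""
    cand_counts = Counter(candidate.lower().split())
    ref_counts = Counter(reference.lower().split())
    # Count each word at most as often as it appears in both texts.
    overlap = sum((cand_counts & ref_counts).values())
    precision = overlap / max(sum(cand_counts.values()), 1)
    recall = overlap / max(sum(ref_counts.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = rouge1_scores(
    "hannah asks amanda for betty's number",
    "hannah needs betty's number but amanda doesn't have it",
)
```

Here 4 of the candidate's 6 words overlap with the 9-word reference, giving precision 4/6, recall 4/9, and F1 8/15.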
Evaluation results (self-reported, on the SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization)

- Validation ROUGE-1: 54.392
- Validation ROUGE-2: 29.808
- Validation ROUGE-L: 45.154
- Test ROUGE-1: 53.306
- Test ROUGE-2: 28.355
- Test ROUGE-L: 44.095