# bart-base-samsum
This model was obtained by fine-tuning facebook/bart-base on the SAMSum dataset.
## Usage
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="lidiya/bart-base-samsum")

conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
'''

summarizer(conversation)
```
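The pipeline returns a list with one dict per input, each containing a `summary_text` key. If you prefer to call the model directly instead of going through the pipeline, a minimal sketch looks like the following; the generation settings (beam search, length limit) are illustrative assumptions, not values prescribed by this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("lidiya/bart-base-samsum")
model = AutoModelForSeq2SeqLM.from_pretrained("lidiya/bart-base-samsum")

# `conversation` is the same dialogue string defined above.
inputs = tokenizer(conversation, return_tensors="pt", truncation=True)

# Generation settings here are illustrative, not the card's official defaults.
summary_ids = model.generate(**inputs, num_beams=4, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```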
## Training procedure
- Colab notebook: https://colab.research.google.com/drive/1RInRjLLso9E2HG_xjA6j8JO3zXzSCBRF?usp=sharing (a rough fine-tuning sketch is included below)
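The notebook fine-tunes facebook/bart-base on SAMSum dialogue/summary pairs. A minimal sketch of that kind of setup with the `Seq2SeqTrainer` API is shown below; the hyperparameters (batch size, learning rate, epochs, sequence lengths) are illustrative assumptions, not the exact values used in the notebook.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

checkpoint = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# The samsum loading script may additionally require the py7zr package.
dataset = load_dataset("samsum")

def preprocess(batch):
    # Tokenize dialogues as inputs and summaries as labels.
    model_inputs = tokenizer(batch["dialogue"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="bart-base-samsum",
    per_device_train_batch_size=8,   # illustrative value
    learning_rate=5e-5,              # illustrative value
    num_train_epochs=3,              # illustrative value
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```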
## Results
| Metric | Value |
|---|---|
| eval_rouge1 | 46.6619 |
| eval_rouge2 | 23.3285 |
| eval_rougeL | 39.4811 |
| eval_rougeLsum | 43.0482 |
| test_rouge1 | 44.9932 |
| test_rouge2 | 21.7286 |
| test_rougeL | 38.1921 |
| test_rougeLsum | 41.2672 |
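The scores above come from the linked notebook. A rough sketch of how numbers like these can be reproduced with the `evaluate` library is shown below; the use of `evaluate`, the 100-example slice, and the `truncation=True` setting are assumptions for illustration, not the card's exact evaluation code.

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

summarizer = pipeline("summarization", model="lidiya/bart-base-samsum")
rouge = evaluate.load("rouge")  # requires the rouge_score package

test = load_dataset("samsum", split="test")

# Summarize a slice of the test set (use the full split for comparable numbers).
dialogues = test["dialogue"][:100]
references = test["summary"][:100]
predictions = [out["summary_text"] for out in summarizer(dialogues, truncation=True)]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # rouge1, rouge2, rougeL, rougeLsum
```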
## Evaluation results

All metrics are ROUGE scores on the SAMSum corpus (SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization).

| Metric | Split | Score | Source |
|---|---|---|---|
| ROUGE-1 | validation | 46.662 | self-reported |
| ROUGE-2 | validation | 23.328 | self-reported |
| ROUGE-L | validation | 39.481 | self-reported |
| ROUGE-1 | test | 44.993 | self-reported |
| ROUGE-2 | test | 21.729 | self-reported |
| ROUGE-L | test | 38.192 | self-reported |
| ROUGE-1 | test | 45.015 | verified |
| ROUGE-2 | test | 21.686 | verified |
| ROUGE-L | test | 38.173 | verified |
| ROUGE-LSUM | test | 41.279 | verified |