bart-large-xsum-samsum

This model is a fine-tuned version of facebook/bart-large-xsum on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 0.759
  • Rouge1: 54.3073
  • Rouge2: 29.0947
  • RougeL: 44.4676
  • RougeLsum: 49.895

Model description

This model tends to generate less verbose summaries than AdamCodd/bart-large-cnn-samsum, yet I find their quality superior (which is reflected in the metrics).

Intended uses & limitations

This model is suited to summarizing dialogue-style text; it may not perform as well on other text formats.

from transformers import pipeline

summarizer = pipeline("summarization", model="AdamCodd/bart-large-xsum-samsum")

conversation = '''Emily: Hey Alex, have you heard about the new restaurant that opened downtown?
Alex: No, I haven't. What's it called?
Emily: It's called "Savory Bites." They say it has the best pasta in town.
Alex: That sounds delicious. When are you thinking of checking it out?
Emily: How about this Saturday? We can make it a dinner date.
Alex: Sounds like a plan, Emily. I'm looking forward to it.
'''

# The pipeline returns a list of dicts, each with a "summary_text" key.
result = summarizer(conversation)
print(result)
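
The pipeline forwards standard generation keyword arguments to the model; the values below are illustrative assumptions, not tuned recommendations.

# max_length, min_length and do_sample are standard transformers generation
# parameters; these particular values are illustrative only.
result = summarizer(conversation, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])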

Training and evaluation data

The model was fine-tuned on the SAMSum corpus, a human-annotated dataset of messenger-style dialogues paired with abstractive summaries; the metrics above were computed on its evaluation split.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a training sketch follows the list):

  • learning_rate: 3e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 1270
  • optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 150
  • num_epochs: 1
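
For reference, here is a minimal sketch of a comparable fine-tuning run using these hyperparameters. It assumes Seq2SeqTrainer and typical BART preprocessing; the max input/target lengths and the trainer setup are assumptions, not the exact script used for this model.

from datasets import load_dataset  # loading samsum may also require py7zr
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-xsum")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-xsum")
dataset = load_dataset("samsum")

def preprocess(batch):
    # Tokenize dialogues as inputs and summaries as labels;
    # the length limits here are assumptions.
    inputs = tokenizer(batch["dialogue"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

# Mirrors the hyperparameters listed above; the Trainer's default optimizer
# is already AdamW with betas=(0.9, 0.999) and epsilon=1e-08.
args = Seq2SeqTrainingArguments(
    output_dir="bart-large-xsum-samsum",
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=1270,
    lr_scheduler_type="linear",
    warmup_steps=150,
    num_train_epochs=1,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()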

Training results

key             value
eval_rouge1     54.3073
eval_rouge2     29.0947
eval_rougeL     44.4676
eval_rougeLsum  49.895
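
Scores like these can be computed with the evaluate library; the prediction and reference strings below are illustrative placeholders, not the actual evaluation data.

import evaluate  # also requires: pip install rouge_score

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["Emily and Alex will try Savory Bites on Saturday."],
    references=["Emily and Alex plan a dinner date at Savory Bites this Saturday."],
)
print(scores)  # dict with rouge1, rouge2, rougeL and rougeLsum F-measures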

Framework versions

  • Transformers 4.35.0
  • Accelerate 0.24.1
  • Datasets 2.14.6
  • Tokenizers 0.14.3

If you want to support me, you can do so here.

