---
base_model: google/pegasus-cnn_dailymail
model-index:
- name: pegasus-samsum
results: []
datasets:
- Samsung/samsum
language:
- en
metrics:
- rouge
pipeline_tag: summarization
library_name: transformers
---
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the
[SAMSum](https://huggingface.co/datasets/Samsung/samsum) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3839
# Intended uses & limitations
## Intended uses
* Dialogue summarization (e.g., chat logs, meetings)
* Text summarization for conversational datasets
## Limitations
* May struggle with very long conversations or non-dialogue text (see the length-check sketch below).
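Because the base PEGASUS checkpoint accepts a bounded number of input tokens (1,024 for google/pegasus-cnn_dailymail), very long chats are silently truncated. A minimal length-check sketch (the helper name is illustrative, not part of this repository):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("seddiktrk/pegasus-samsum")

def fits_context(dialogue: str, max_tokens: int = 1024) -> bool:
    """Return True if the dialogue fits the model's input window."""
    return len(tokenizer(dialogue)["input_ids"]) <= max_tokens

# Longer conversations can be split by speaker turns and summarized
# chunk by chunk before a final pass over the partial summaries.
```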
# Training procedure
## Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
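For reference, a sketch of how these values map onto `Seq2SeqTrainingArguments` from the `transformers` library; `output_dir` is an illustrative assumption, and the Adam betas/epsilon above are the library defaults, so they need no explicit arguments:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="pegasus-samsum",      # assumed name, not from this card
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=16,   # effective train batch size: 1 * 16 = 16
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=2,
)
```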
## Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6026 | 0.5431 | 500 | 1.4875 |
| 1.4737 | 1.0861 | 1000 | 1.4040 |
| 1.4735 | 1.6292 | 1500 | 1.3839 |
### Test results
| rouge1   | rouge2   | rougeL   | rougeLsum |
|:--------:|:--------:|:--------:|:---------:|
| 0.427614 | 0.200571 | 0.340648 | 0.340738  |
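Scores of this kind can be computed with the `evaluate` library. A sketch with illustrative predictions and references (not the actual test data):
```python
import evaluate

rouge = evaluate.load("rouge")
predictions = ["John and Seddik will brainstorm ideas this weekend."]
references = ["Seddik and John plan to collaborate and brainstorm this weekend."]

# Returns a dict with rouge1, rouge2, rougeL, and rougeLsum scores.
scores = rouge.compute(predictions=predictions, references=references)
print(scores)
```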
## How to use
You can use this model with the transformers library for dialogue summarization. Here's an example in Python:
```python
from transformers import pipeline
import torch

# Run on GPU if one is available, otherwise fall back to CPU.
device = 0 if torch.cuda.is_available() else -1

pipe = pipeline(
    "summarization",
    model="seddiktrk/pegasus-samsum",
    device=device,
)

custom_dialogue = """\
Seddik: Hey, have you tried using PEGASUS for summarization?
John: Yeah, I just started experimenting with it last week!
Seddik: It's pretty powerful, especially for abstractive summaries.
John: I agree! The results are really impressive.
Seddik: I was thinking of using it for my next project. Want to collaborate?
John: Absolutely! We could make some awesome improvements together.
Seddik: Perfect, let's brainstorm ideas this weekend.
John: Sounds like a plan!
"""

# Beam search with a mild length penalty, capped at 128 generated tokens.
gen_kwargs = {"length_penalty": 0.8, "num_beams": 8, "max_length": 128}
print(pipe(custom_dialogue, **gen_kwargs)[0]["summary_text"])
```
### Example output
```
John started using PEG for summarization last week. Seddik is thinking of using it for his next project.
John and Seddik will brainstorm ideas this weekend.
```
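If you prefer explicit control over tokenization and generation, the same checkpoint also works without the pipeline wrapper. A minimal sketch, mirroring the `gen_kwargs` from above:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "seddiktrk/pegasus-samsum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

dialogue = "Seddik: Lunch at noon?\nJohn: Sure, see you then!"
inputs = tokenizer(dialogue, truncation=True, return_tensors="pt")

# Same beam-search settings as the pipeline example.
summary_ids = model.generate(
    **inputs, length_penalty=0.8, num_beams=8, max_length=128
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```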
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1