# pegasus-x-base-synthsumm_open-16k
This is a text-to-text summarization model fine-tuned from pegasus-x-base on a dataset of long documents from various sources and domains, paired with synthetic summaries.
It performs surprisingly well as a general-purpose summarizer for its size. More details, a larger model, and the dataset will be released as time permits.
## Usage
Beam search decoding is recommended with this model. If interested, you can also use the `textsum` utility package, which abstracts most of this away:
```bash
pip install -U textsum
```
then:
```python
from textsum.summarize import Summarizer

model_name = "BEE-spoke-data/pegasus-x-base-synthsumm_open-16k"
summarizer = Summarizer(model_name)  # GPU auto-detected

text = "put the text you don't want to read here"
summary = summarizer.summarize_string(text)
print(summary)
```
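If you prefer plain `transformers`, the beam-search recommendation above can be sketched directly with `generate`. The generation parameters below (`num_beams=4`, `max_new_tokens=256`) are illustrative assumptions, not tuned values from the model authors:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "BEE-spoke-data/pegasus-x-base-synthsumm_open-16k"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "put the text you don't want to read here"
# truncate to the model's 16k-token context window
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=16384)

summary_ids = model.generate(
    **inputs,
    num_beams=4,         # beam search, per the recommendation above (width is an assumption)
    max_new_tokens=256,  # illustrative cap on summary length
)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```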
## Base model
google/pegasus-x-base