
pegasus-x-base-synthsumm_open-16k


This is a text-to-text summarization model fine-tuned from pegasus-x-base on a dataset of long documents from various sources/domains and their synthetic summaries.

It performs surprisingly well as a general summarization model for its size. More details, a larger model, and the dataset will be released (as time permits).

Usage

It's recommended to use this model with beam search decoding. If interested, you can also use the textsum utility package, which abstracts most of this away for you:

pip install -U textsum

then:

from textsum.summarize import Summarizer

model_name = "BEE-spoke-data/pegasus-x-base-synthsumm_open-16k"
summarizer = Summarizer(model_name) # GPU auto-detected
text = "put the text you don't want to read here"
summary = summarizer.summarize_string(text)
print(summary)
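If you prefer to call transformers directly rather than going through textsum, a minimal sketch looks like the following. The `num_beams=4` setting and the `truncation=True` flag are illustrative assumptions to enable the recommended beam search decoding, not official defaults from this card:

```python
# Minimal sketch using the transformers summarization pipeline directly.
# num_beams=4 enables beam search decoding (an assumed, illustrative value);
# truncation=True clips inputs longer than the model's context window.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="BEE-spoke-data/pegasus-x-base-synthsumm_open-16k",
)

long_text = "put the text you don't want to read here"
result = summarizer(long_text, num_beams=4, truncation=True)
summary = result[0]["summary_text"]
print(summary)
```

This gives you direct access to the full set of generation parameters (length penalties, no-repeat n-grams, and so on) if you want to tune decoding beyond what textsum exposes.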
Model size: 272M params (Safetensors, F32 tensors)
