# MediaSum dataset for summarization

Summarization dataset copied from *MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization*.
This dataset is compatible with the `run_summarization.py` script from Transformers if you add this line to the `summarization_name_mapping` variable:

```python
"ccdv/mediasum": ("document", "summary")
```
## Configs

4 possible configs:

- `roberta`: will concatenate documents with `"</s>"`
- `newline`: will concatenate documents with `"\n"`
- `bert`: will concatenate documents with `"[SEP]"`
- `list`: will return the list of documents instead of a single string
Add `_prepended` to the config name to prepend the speaker name before each dialogue turn: `speaker: text`.

The default config is `roberta_prepended` (compatible with BART).
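The effect of these config choices can be sketched as follows. This is an illustrative re-implementation, not the dataset's actual loading script, and the `turns` input format is a hypothetical example:

```python
def build_document(turns, sep="</s>", prepend_speaker=True, as_list=False):
    """Join dialogue turns into one document string, or keep them as a list.

    turns: list of (speaker, text) pairs -- a hypothetical input format.
    sep:   separator matching the config ("</s>" for roberta,
           "\n" for newline, "[SEP]" for bert).
    as_list: mimics the `list` configs, which skip concatenation.
    """
    parts = [f"{speaker}: {text}" if prepend_speaker else text
             for speaker, text in turns]
    return parts if as_list else sep.join(parts)

turns = [("HOST", "Welcome to the show."), ("GUEST", "Thanks for having me.")]

# roberta_prepended-style output:
doc = build_document(turns)
# newline config, no speaker prefix:
doc_nl = build_document(turns, sep="\n", prepend_speaker=False)
# list config:
doc_list = build_document(turns, as_list=True)
```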
## Data Fields

- `id`: example id
- `document`: a string/list containing the body of a set of documents
- `summary`: a string containing the abstract of the set
## Data Splits

This dataset has 3 splits: train, validation, and test.
| Dataset Split | Number of Instances |
|---|---|
| Train | 443596 |
| Validation | 10000 |
| Test | 10000 |
## Cite original article

```
@article{zhu2021mediasum,
  title={MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization},
  author={Zhu, Chenguang and Liu, Yang and Mei, Jie and Zeng, Michael},
  journal={arXiv preprint arXiv:2103.06410},
  year={2021}
}
```