---
dataset_info:
  features:
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: summary
    dtype: string
  - name: article
    dtype: string
  - name: step_headers
    dtype: string
  splits:
  - name: train
    num_bytes: 315275236
    num_examples: 35775
  - name: test
    num_bytes: 17584216
    num_examples: 2000
  - name: validation
    num_bytes: 17880851
    num_examples: 2000
  download_size: 194202865
  dataset_size: 350740303
license:
- unknown
task_categories:
- summarization
language:
- en
multilinguality:
- monolingual
tags:
- abstractive-summarization
- wiki
- abstractive
pretty_name: 'WikiSum: Coherent Summarization Dataset for Efficient Human-Evaluation'
size_categories:
- 10K<n<100K
source_datasets:
- original
paperswithcode_id: wikisum
---
# WikiSum

## Dataset Description
- **Homepage:** https://registry.opendata.aws/wikisum/
- **Repository:** https://github.com/tensorflow/tensor2tensor/tree/master/tensor2tensor/data_generators/wikisum
- **Paper:** [Generating Wikipedia by Summarizing Long Sequences](https://arxiv.org/abs/1801.10198)
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** nachshon
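
The splits and string features declared in the metadata above can be inspected with the Hugging Face `datasets` library. A minimal sketch, assuming the dataset is published on the Hub under the id `wikisum` (substitute the actual namespace/repository id for this card):

```python
from datasets import load_dataset

# Hypothetical Hub id -- replace with the actual repository path for this card.
ds = load_dataset("wikisum")

# Split sizes as declared in the metadata above (train / validation / test).
for split in ("train", "validation", "test"):
    print(split, len(ds[split]))

# Each example exposes the string features: url, title, summary, article, step_headers.
example = ds["train"][0]
print(example["title"])
print(example["summary"][:200])
```

Because the data is distributed as parquet, `load_dataset(..., streaming=True)` should also work if you prefer to iterate without downloading the full ~194 MB archive.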