---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the MS^2 dataset, except that the input source documents of its validation split have been replaced with documents retrieved by a sparse retriever. The retrieval pipeline used the following setup (a code sketch follows the list):

- **query**: the `background` field of each example
- **corpus**: the union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- **retriever**: BM25 via PyTerrier with default settings
- **top-k strategy**: `"mean"`, i.e. the number of documents retrieved, `k`, is set to the mean number of source documents seen across examples in this dataset, in this case `k == 17`
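A minimal sketch of this kind of pipeline in PyTerrier is shown below. The corpus entries, document identifiers, query text, and index path are illustrative placeholders, not part of the released dataset; in the actual pipeline each document is the concatenated `title` and `abstract`, and each query is an example's `background` field.

```python
import pandas as pd
import pyterrier as pt

if not pt.started():
    pt.init()

# Toy corpus: each "document" is the concatenation of a title and an abstract.
# The docnos and texts below are placeholders.
corpus = [
    {"docno": "d1", "text": "Statins for cardiovascular prevention. We reviewed trials of ..."},
    {"docno": "d2", "text": "Aspirin and stroke risk. A randomized trial comparing ..."},
]

# Index the corpus with Terrier's default settings.
indexer = pt.IterDictIndexer("./bm25-index", overwrite=True)
index_ref = indexer.index(corpus)

# BM25 with default settings; "% 17" keeps the top k = 17 documents per query,
# the mean number of source documents per example in this dataset.
bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25") % 17

# Queries are the `background` field of each example (placeholder text here).
queries = pd.DataFrame([{"qid": "q1", "query": "effect of statins on cardiovascular outcomes"}])

results = bm25.transform(queries)
print(results[["qid", "docno", "rank", "score"]])
```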
Retrieval results on the `train` set:

| Recall@100 | Rprec | Precision@k | Recall@k |
|---|---|---|---|
| 0.4333 | 0.2163 | 0.2051 | 0.2197 |
Retrieval results on the `validation` set:

| Recall@100 | Rprec | Precision@k | Recall@k |
|---|---|---|---|
| 0.3780 | 0.1827 | 0.1815 | 0.1792 |
Retrieval results on the `test` set:

| Recall@100 | Rprec | Precision@k | Recall@k |
|---|---|---|---|
| 0.3928 | 0.1898 | 0.1951 | 0.1820 |
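Metrics of this kind (Recall@100, Rprec, Precision@k, Recall@k with k = 17) can be computed with PyTerrier's experiment API. The sketch below is illustrative only: the toy `topics` and `qrels` stand in for the real queries (the `background` fields) and the gold source documents, and it reuses the index path and rank cutoff assumed in the earlier sketch.

```python
import pandas as pd
import pyterrier as pt
from pyterrier.measures import P, R, Rprec

if not pt.started():
    pt.init()

# Reload the BM25 pipeline from the index built in the earlier sketch.
bm25 = pt.BatchRetrieve("./bm25-index", wmodel="BM25") % 17

# Placeholder topics (one row per example: qid + background text used as the query)
# and placeholder qrels (qid, docno, label = 1 for each gold source document).
topics = pd.DataFrame([{"qid": "q1", "query": "effect of statins on cardiovascular outcomes"}])
qrels = pd.DataFrame([{"qid": "q1", "docno": "d1", "label": 1}])

# Evaluate Recall@100, Rprec, Precision@17 and Recall@17.
report = pt.Experiment(
    [bm25],
    topics,
    qrels,
    eval_metrics=[R @ 100, Rprec, P @ 17, R @ 17],
)
print(report)
```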