Datasets: GEM / wiki_auto_asset_turk

Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, pandas

Sebastian Gehrmann committed
Commit 5ca8de6
Parent: 59b1700

Files changed (1)
  1. wiki_auto_asset_turk.json +3 -0
wiki_auto_asset_turk.json CHANGED
@@ -43,6 +43,9 @@
  "structure-splits": "In WikiAuto, which is used as the training and validation set, the following splits are provided: \n\n| | Train | Dev | Test |\n| ----- | ------ | ----- | ---- |\n| Total sentence pairs | 373801 | 73249 | 118074 |\n| Aligned sentence pairs | 1889 | 346 | 677 |\n\nASSET does not contain a training set; many models use [WikiLarge](https://github.com/XingxingZhang/dress) (Zhang and Lapata, 2017) for training. For GEM, [Wiki-Auto](https://github.com/chaojiang06/wiki-auto) will be used for training the model.\n\nEach input sentence has 10 associated reference simplifications. The statistics of ASSET are given below.\n\n| | Dev | Test | Total |\n| ----- | ------ | ---- | ----- |\n| Input Sentences | 2000 | 359 | 2359 |\n| Reference Simplifications | 20000 | 3590 | 23590 |\n\nThe test and validation sets are the same as those of [TurkCorpus](https://github.com/cocoxu/simplification/). The split was random.\n\nThere are 19.04 tokens per reference on average (lower than 21.29 and 25.49 for TurkCorpus and HSplit, respectively). Most (17,245) of the reference sentences do not involve sentence splitting.\n\nTurkCorpus does not contain a training set; many models use [WikiLarge](https://github.com/XingxingZhang/dress) (Zhang and Lapata, 2017) or [Wiki-Auto](https://github.com/chaojiang06/wiki-auto) (Jiang et al., 2020) for training.\n\nEach input sentence has 8 associated reference simplifications. The 2,359 input sentences are randomly split into 2,000 validation and 359 test sentences.\n\n| | Dev | Test | Total |\n| ----- | ------ | ---- | ----- |\n| Input Sentences | 2000 | 359 | 2359 |\n| Reference Simplifications | 16000 | 2872 | 18872 |\n\nThere are 21.29 tokens per reference on average.\n\n",
  "structure-splits-criteria": "In our setup, we use WikiAuto as the training/validation corpus and ASSET and TURK as test corpora. ASSET and TURK have the same inputs but differ in their reference style. Researchers can thus conduct targeted evaluations based on the strategies that a model should learn.",
  "structure-outlier": "n/a"
+ },
+ "what": {
+ "dataset": "WikiAuto is an English simplification dataset that we paired with ASSET and TURK, two very high-quality evaluation datasets, as test sets. The input is an English sentence taken from Wikipedia and the target is a simplified sentence. ASSET and TURK contain the same test examples but have references that are simplified in different ways (splitting sentences vs. rewriting and splitting)."
  }
  },
  "curation": {