parquet-converter committed on
Commit 5919cb2 · 1 Parent(s): 93f5485

Update parquet files
.gitattributes DELETED
@@ -1,52 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
- *.csv filter=lfs diff=lfs merge=lfs -text
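The removed attributes above are gitattributes glob patterns routing matching files through the Git LFS filter. As a rough sketch of what such a pattern list does (using Python's `fnmatch`, which only approximates gitattributes glob semantics — e.g. `**` handling differs), one can check which filenames would have been sent to LFS:

```python
from fnmatch import fnmatch

# A subset of the patterns from the removed .gitattributes.
LFS_PATTERNS = ["*.parquet", "*.csv", "*.png", "*tfevents*", "saved_model/**/*"]

def routed_to_lfs(path: str, patterns=LFS_PATTERNS) -> bool:
    """Rough check: does any pattern match the file name or full path?

    Note: real gitattributes matching has its own glob rules;
    fnmatch is only an approximation for illustration.
    """
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, p) or fnmatch(path, p) for p in patterns)

print(routed_to_lfs("CNN-DM/controlled-text-reduction-dataset-train-00000-of-00003.parquet"))
print(routed_to_lfs("README.md"))
```

With the `.gitattributes` file deleted, none of these patterns apply any longer; the parquet files added below carry their own LFS pointers.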
CNN-DM/controlled-text-reduction-dataset-train-00000-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a5b4dcef2986467cd8f4cbae648b33f38fb97bf6553b08b52e69d7d4e72e3414
+ size 303423465
CNN-DM/controlled-text-reduction-dataset-train-00001-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b622f7c524a4b94204c5b13f5d9120958fc853e25cbf75ac0879ea78f84c17e5
+ size 294732574
CNN-DM/controlled-text-reduction-dataset-train-00002-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bca96d08496602764d2a0e50103879c43f9cf2c3d4f04f440bd0ae5307c2ab47
+ size 184571199
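Each ADDED file above is not the parquet data itself but a Git LFS pointer: three `key value` lines giving the spec version, the SHA-256 object id, and the byte size of the real file. A minimal parser (a sketch; the full LFS pointer spec allows additional keys):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # oid is "sha256:<hex>"; size is a decimal byte count
    fields["size"] = int(fields["size"])
    return fields

# The first CNN-DM train shard's pointer, as committed above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:a5b4dcef2986467cd8f4cbae648b33f38fb97bf6553b08b52e69d7d4e72e3414
size 303423465
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 303423465
```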
Controlled-Text-Reduction-dataset.py DELETED
@@ -1,193 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """A Dataset loading script for the Controlled Text Reduction dataset."""
-
-
- import datasets
- from pathlib import Path
- from typing import List
- import pandas as pd
- from dataclasses import dataclass
-
- _CITATION = """"""
- # _CITATION = """\
- # @inproceedings{roit2020controlled,
- #     title={Controlled Crowdsourcing for High-Quality QA-SRL Annotation},
- #     author={Roit, Paul and Klein, Ayal and Stepanov, Daniela and Mamou, Jonathan and Michael, Julian and Stanovsky, Gabriel and Zettlemoyer, Luke and Dagan, Ido},
- #     booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
- #     pages={7008--7013},
- #     year={2020}
- # }
- # """
-
-
- _DESCRIPTION = """\
- The dataset contains document-summary pairs with document spans (referred to as "highlights"), indicating the "pre-selected" spans that lead to the creation of the summary.
- The evaluation and test datasets were constructed via controlled crowdsourcing.
- The train datasets were automatically generated using the summary-source proposition-level alignment model SuperPAL (Ernst et al., 2021).
- """
-
- _HOMEPAGE = "https://github.com/lovodkin93/Controlled_Text_Reduction/tree/main"
-
- _LICENSE = """MIT License
- Copyright (c) 2022 lovodkin93
- Permission is hereby granted, free of charge, to any person obtaining a copy
- of this software and associated documentation files (the "Software"), to deal
- in the Software without restriction, including without limitation the rights
- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- copies of the Software, and to permit persons to whom the Software is
- furnished to do so, subject to the following conditions:
- The above copyright notice and this permission notice shall be included in all
- copies or substantial portions of the Software.
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- SOFTWARE."""
-
-
-
- _URLs = {
-     "DUC-2001-2002": {
-         "train": "https://media.githubusercontent.com/media/lovodkin93/Controlled_Text_Reduction/main/data/train_DUC-2001-2002.csv",
-         "dev": "https://media.githubusercontent.com/media/lovodkin93/Controlled_Text_Reduction/main/data/dev_DUC-2001-2002.csv",
-         "test": "https://media.githubusercontent.com/media/lovodkin93/Controlled_Text_Reduction/main/data/test_DUC-2001-2002.csv",
-     },
-     "CNN-DM": {
-         "train": "https://media.githubusercontent.com/media/lovodkin93/Controlled_Text_Reduction/main/data/train_CNNDM.csv",
-         "dev": "https://media.githubusercontent.com/media/lovodkin93/Controlled_Text_Reduction/main/data/dev_DUC-2001-2002.csv",
-         "test": "https://media.githubusercontent.com/media/lovodkin93/Controlled_Text_Reduction/main/data/test_DUC-2001-2002.csv",
-     },
- }
-
-
- @dataclass
- class ControlledTextReductionConfig(datasets.BuilderConfig):
-     """ Allow the loader to re-distribute the original dev and test splits between train, dev and test. """
-     data_source: str = "DUC-2001-2002" # "DUC-2001-2002" or "CNN-DM"
-
-
-
-
-
- class ControlledTectReduction(datasets.GeneratorBasedBuilder):
-     """Controlled Text Reduction: dataset for the Controlled Text Reduction task ().
-     Each data point consists of a document, a summary, and a list of spans of the document that are the pre-selected content whose summary is the summary"""
-
-
-     VERSION = datasets.Version("1.0.0")
-
-     BUILDER_CONFIG_CLASS = ControlledTextReductionConfig
-
-     BUILDER_CONFIGS = [
-         ControlledTextReductionConfig(
-             name="DUC-2001-2002",
-             version=VERSION,
-             description="This provides the Controlled Text Reduction dataset extracted from the DUC 2001-2002 Single Document Summarization benchmark",
-             data_source="DUC-2001-2002"
-         ),
-         ControlledTextReductionConfig(
-             name="CNN-DM",
-             version=VERSION,
-             description="This provides the Controlled Text Reduction dataset extracted from the CNN-DM dataset (the train split)",
-             data_source="CNN-DM"
-         )
-     ]
-
-     DEFAULT_CONFIG_NAME = (
-         "DUC-2001-2002" # It's not mandatory to have a default configuration. Just use one if it make sense.
-     )
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "doc_text": datasets.Value("string"),
-                 "summary_text": datasets.Value("string"),
-                 "highlight_spans": datasets.Value("string"),
-             }
-         )
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # This defines the different columns of the dataset and their types
-             features=features, # Here we define them above because they are different between the two configurations
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage=_HOMEPAGE,
-             # License for the dataset if available
-             license=_LICENSE,
-             # Citation for the dataset
-             citation=_CITATION,
-         )
-
-
-     def _split_generators(self, dl_manager: datasets.utils.download_manager.DownloadManager):
-         """Returns SplitGenerators."""
-
-         URLs = _URLs[self.config.data_source]
-         # Download and prepare all files - keep same structure as URLs
-         corpora = {section: Path(dl_manager.download_and_extract(URLs[section]))
-                    for section in URLs}
-
-         if self.config.data_source=="CNN-DM":
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TRAIN,
-                     # These kwargs will be passed to _generate_examples
-                     gen_kwargs={
-                         "filepath": corpora["train"]
-                     },
-                 )
-             ]
-         else:
-
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TRAIN,
-                     # These kwargs will be passed to _generate_examples
-                     gen_kwargs={
-                         "filepath": corpora["train"]
-                     },
-                 ),
-                 datasets.SplitGenerator(
-                     name=datasets.Split.VALIDATION,
-                     # These kwargs will be passed to _generate_examples
-                     gen_kwargs={
-                         "filepath": corpora["dev"]
-                     },
-                 ),
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TEST,
-                     # These kwargs will be passed to _generate_examples
-                     gen_kwargs={
-                         "filepath": corpora["test"]
-                     },
-                 ),
-             ]
-
-
-     def _generate_examples(self, filepath: List[str]):
-
-         """ Yields Controlled Text Reduction examples from a csv file. Each instance contains the document, the summary and the pre-selected spans."""
-
-         # merge annotations from sections
-         df = pd.read_csv(filepath)
-         for counter, dic in enumerate(df.to_dict('records')):
-             yield counter, dic
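The deleted script's `_generate_examples` simply enumerates CSV rows as dicts; the parquet conversion above makes it unnecessary. For illustration, a self-contained sketch of the same logic using only the standard library (`csv` in place of pandas, with a hypothetical inline sample row in the dataset's three-column schema standing in for the real files):

```python
import csv
import io

def generate_examples(csv_text: str):
    """Yield (index, record) pairs, mirroring the script's pandas loop
    (enumerate over df.to_dict('records'))."""
    reader = csv.DictReader(io.StringIO(csv_text))
    for counter, record in enumerate(reader):
        yield counter, record

# Hypothetical sample row; real rows come from the repo's CSV/parquet files.
sample = (
    "doc_text,summary_text,highlight_spans\n"
    '"The Oscar was created 60 years ago.","Oscar turns 60.","[[0, 9]]"\n'
)
for idx, rec in generate_examples(sample):
    print(idx, rec["summary_text"])
```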
DUC-2001-2002/controlled-text-reduction-dataset-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d0e6d337e0a374dc95da4c340e0f15acba3c663e4bca2f65c88efa2d121bc5f
+ size 644223
DUC-2001-2002/controlled-text-reduction-dataset-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:baf50588e9dcfc6be7b53b8fab6b5d9e7cfc302121abb4c903e28f71debae155
+ size 3225567
DUC-2001-2002/controlled-text-reduction-dataset-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:83305fbd87e147882e1ae570143af82c387f67519c5b7304e4ae6b00c20d74af
+ size 219424
README.md DELETED
@@ -1,42 +0,0 @@
- # Controlled Text Reduction
-
- This dataset contains Controlled Text Reduction triplets - document-summary pairs, and the spans in the document that cover the summary.
- The task input consists of a document with pre-selected spans in it ("highlights"). The output is a text covering all and only the highlighted content.
-
- The script downloads the data from the original [GitHub repository](https://github.com/lovodkin93/Controlled_Text_Reduction).
-
- ### Format
-
- The dataset contains the following important features:
-
- * `doc_text` - the input text.
- * `summary_text` - the output text.
- * `highlight_spans` - the spans in the input text (the doc_text) that lead to the output text (the summary_text).
-
- ```json
- {'doc_text': 'The motion picture industry\'s most coveted award...with 32.',
- 'summary_text': 'The Oscar, created 60 years ago by MGM...awarded person (32).',
- 'highlight_spans':'[[0, 48], [50, 55], [57, 81], [184, 247], ..., [953, 975], [1033, 1081]]'}
- ```
- where for each document-summary pair, we save the spans in the input document that lead to the summary.
-
- Notice that the dataset consists of two subsets:
- 1. `DUC-2001-2002` - which is further divided into 3 splits (train, validation and test).
- 2. `CNN-DM` - which has a single split.
-
- Citation
- ========
- If you find the Controlled Text Reduction dataset useful in your research, please cite the following paper:
- ```
- @misc{https://doi.org/10.48550/arxiv.2210.13449,
-     doi = {10.48550/ARXIV.2210.13449},
-     url = {https://arxiv.org/abs/2210.13449},
-     author = {Slobodkin, Aviv and Roit, Paul and Hirsch, Eran and Ernst, Ori and Dagan, Ido},
-     keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
-     title = {Controlled Text Reduction},
-     publisher = {arXiv},
-     year = {2022},
-     copyright = {Creative Commons Zero v1.0 Universal}
- }
- ```
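The deleted README describes `highlight_spans` as a string encoding a list of `[start, end]` character offsets into `doc_text`. A sketch of decoding that string and recovering the highlighted segments (toy values below, not real dataset content):

```python
import json

def extract_highlights(doc_text: str, highlight_spans: str) -> list:
    """Decode the span string and slice out each highlighted segment."""
    spans = json.loads(highlight_spans)  # e.g. [[0, 48], [50, 55], ...]
    return [doc_text[start:end] for start, end in spans]

doc = "The Oscar was created 60 years ago by MGM."
print(extract_highlights(doc, "[[0, 9], [22, 34]]"))
```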