Modalities: Text
Formats: json
Languages: English
Libraries: Datasets, pandas
parquet-converter committed on

Commit 0665245 · 1 Parent(s): ccaa997

Update parquet files
.gitattributes DELETED
@@ -1,40 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- test.jsonl filter=lfs diff=lfs merge=lfs -text
- train.jsonl filter=lfs diff=lfs merge=lfs -text
- validation.jsonl filter=lfs diff=lfs merge=lfs -text

README.md DELETED
@@ -1,209 +0,0 @@
- ---
- annotations_creators:
- - found
- language_creators:
- - found
- language:
- - en
- license:
- - cc-by-sa-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 1K<n<10K
- source_datasets:
- - original
- task_categories:
- - summarization
- task_ids:
- - summarization
- - query-based-summarization
- ---
-
- # Dataset Card for answersumm
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-
- ## Dataset Description
-
- - **Homepage:** https://github.com/Alex-Fabbri/AnswerSumm
- - **Paper:** [AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization](https://arxiv.org/abs/2111.06474)
- - **Point of Contact:** [Alex Fabbri](mailto:afabbri@salesforce.com)
-
- ### Dataset Summary
-
- The AnswerSumm dataset is an English-language dataset of questions and answers collected from a [StackExchange data dump](https://archive.org/details/stackexchange). The dataset was created to support the task of query-focused answer summarization with an emphasis on multi-perspective answers.
- The dataset consists of over 4,200 such question-answer threads annotated by professional linguists and includes over 8,700 summaries. We decompose the task into several annotation stages: sentence selection, sentence clustering, cluster summarization, and overall summarization. For each thread, the annotator writes two summaries; for the first, the annotator is asked to mark the sentences included in the final summary and to stay close to the wording of those sentences rather than abstract away from them. A subset of the examples in the test set has multiple annotators.
-
- ### Languages
-
- The text in the dataset is in English.
-
- ## Dataset Structure
-
- ### Data Instances
-
- A data point comprises a question with a `title` field containing an overview of the question and a `question` field that elaborates on the title. The answers are sentence-tokenized and contain relevance labels, labels for inclusion in the final summary, and cluster labels. We include cluster summaries, overall summaries, and additional metadata.
-
- An example from the AnswerSumm test set looks as follows:
- ```json
- {
-     "example_id": "9_24",
-     "annotator_id": [1],
-     "question": {
-         "author": "gaming.stackexchange.com/users/11/Jeffrey",
-         "forum": "gaming.stackexchange.com",
-         "link": "gaming.stackexchange.com/questions/1",
-         "question": "Now that the Engineer update has come, there will be lots of Engineers building up everywhere. How should this best be handled?",
-         "question_tags": "<team-fortress-2>",
-         "title": "What is a good strategy to deal with lots of engineers turtling on the other team?"
-     },
-     "answers": [
-         {
-             "answer_details": {
-                 "author": "gaming.stackexchange.com/users/44/Corv1nus",
-                 "score": 49
-             },
-             "sents": [
-                 {
-                     "text": "Lots of medics with lots of ubers on high-damage-dealing classes.",
-                     "label": [0],
-                     "label_summ": [0],
-                     "cluster_id": [[-1]]
-                 },
-                 ...
-             ]
-         },
-         ...
-     ],
-     "summaries": [
-         [
-             "Demomen usually work best against a sentry farm. Heavies or pyros can also be effective. Medics should be in the frontline to absorb the shock. Build a teleporter to help your team through.",
-             "Demomen are best against a sentry farm. Heavies or pyros can also be effective. The medic should lead the uber combo. ..."
-         ]
-     ],
-     "cluster_summaries": [
-         "Demomen are best against a sentry farm.",
-         "Heavies or pyros can also be effective.",
-         ...
-     ]
- }
- ```
-
- ### Data Fields
-
- - question: contains metadata about the question and forum
-   - question: the body of the question post
-   - title: the title of the question post
-   - question_tags: user-provided question tags
-   - link: link to the original question
-   - author: link to the author's user page (as requested by StackExchange's attribution policy)
- - answers: list of sentence-tokenized answers
-   - answer_details: dictionary consisting of a link to the answer author's user page (author) and the community-assigned score (score)
-   - sents: sentences that compose the answer
-     - text: the sentence text
-     - label: a list (to generalize to multi-annotator scenarios) of whether the sentence is labeled as relevant for answering the question
-     - label_summ: a list of whether the sentence was used to write the first annotator-created summary (the first summary in `summaries`)
-     - cluster_id: a list of lists (there may be multiple annotators, and a sentence can belong to multiple clusters) of the clusters a sentence belongs to; -1 means no cluster. This label can be used to aggregate sentences into clusters across answers.
- - summaries: list of lists of summaries. Each annotator wrote two summaries. The first in the list is the summary for which the annotator was told to mark sentences relevant for inclusion and then closely use the words of those sentences, while for the second summary the annotator was asked to paraphrase and condense the cluster summaries but was not asked to reduce abstraction.
- - annotator_id: a list of the ids of the annotator(s) who completed all tasks related to that thread.
- - mismatch_info: a dict of any issues in processing the Excel files on which annotations were completed.
-   - rel_sent_not_in_cluster: list of booleans indicating whether there are sentences labeled as relevant that were not included in any cluster.
-   - cluster_sents_not_matched: list of sentences that were found in a cluster but which our processing script did not automatically match to sentences in the source answers. If cluster summarization is of interest to you, you may want to process these examples separately using clusters_orig.
-
- ### Data Splits
-
- The data is split into training, validation, and test sets using stratified sampling on the source forums. There are 2,783 training, 500 validation, and 1,000 test threads.
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- AnswerSumm was built to provide a testbed for query-focused summarization of multi-perspective answers. The data collection was designed to tackle multiple subtasks, including sentence selection, clustering, cluster summarization, and overall summarization.
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- The data was obtained by filtering examples based on a whitelist of StackExchange forums that we believed could be summarized by a layperson. We asked annotators to remove examples that required technical knowledge or additional context beyond what was present in the answers.
-
- #### Who are the source language producers?
-
- The language producers are the users of the sampled StackExchange forums.
-
- ### Annotations
-
- #### Annotation process
-
- Please see our [paper](https://arxiv.org/pdf/2111.06474.pdf) for additional annotation details. We began with a pre-pilot of 50 examples, followed by a pilot of 500 and a final annotation of 5,000 examples. This release contains the results of the final data collection. We will release the instructions used in data collection.
-
- #### Who are the annotators?
-
- The annotators are professional linguists who were hired through an internal contractor.
-
- ### Personal and Sensitive Information
-
- We did not anonymize the data. We followed the StackExchange specifications [here](https://archive.org/details/stackexchange) to include author information.
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- The purpose of this dataset is to help develop systems that automatically summarize multi-perspective answers. A system that succeeds at this task would be able to summarize the many perspectives present in the answers rather than limiting itself to a single one.
-
- ### Discussion of Biases
-
- While StackExchange allows for the exchange of information and ideas, hate and harassment may exist on the site. Although our annotators did not flag such examples during this process, we encourage users of the dataset to reach out with concerns.
- We also note that the dataset is monolingual and therefore limited in its language coverage.
-
- ## Additional Information
-
- ### Dataset Curators
-
- The dataset was collected by Alex Fabbri, Xiaojian Wu, Srini Iyer, Haoran Li, and Mona Diab during work done at Facebook.
-
- ### Licensing Information
-
- The data is released under CC BY-SA 4.0, following the original StackExchange [release](https://archive.org/details/stackexchange).
-
- ### Citation Information
-
- ```bibtex
- @misc{fabbri-etal-2022-answersumm,
-     title={AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization},
-     author={Alexander R. Fabbri and Xiaojian Wu and Srini Iyer and Haoran Li and Mona Diab},
-     year={2022},
-     eprint={2111.06474},
-     archivePrefix={arXiv},
-     primaryClass={cs.CL},
-     url={https://arxiv.org/abs/2111.06474}
- }
- ```

validation.jsonl → alexfabbri--answersumm/json-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ffa9fdd0d1cc612042816af9b9171c0dad6d6e99de23501316df8e5007e8b076
- size 4428855
+ oid sha256:dfe829184cdf9f14685cc80a5b3429acffa2b926929f176accbb6c9e593dd6f9
+ size 3178764
test.jsonl → alexfabbri--answersumm/json-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:123107b9c4cd9620ff3611aa3756cdefa0ea9498465b8cd9390fb4cc3b5b08bf
- size 8755413
+ oid sha256:8a2427112c7758a1fcd51f21624dcd40a31bb07c7f9ff8b2cb3737737499ae3a
+ size 9479463
train.jsonl → alexfabbri--answersumm/json-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a318d34fc2888a8e8c58d7c38e60005bf741beaacd78e99ce7433d1aff09e7b6
- size 24817274
+ oid sha256:6e882f54e56164bf34b445a71e36fac6ade1d86e361b09953089e52b1faad0a1
+ size 1711439
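
Below is a minimal loading sketch for the converted files. It assumes a local clone of the repository and the Parquet file names shown in the rename hunks above (`alexfabbri--answersumm/json-{train,validation,test}.parquet`); the split labels are inferred from the target file names (the rename pairings appear to come from git's similarity detection on the LFS pointers, not from a semantic split mapping), and the record layout (question, answers, sents, cluster_id) follows the dataset card reproduced in the diff. None of these paths or names are guaranteed by the commit itself.

```python
# Minimal sketch: load the converted Parquet splits and group one answer's
# sentences by cluster id, using the field layout described in the dataset card.
# Assumes a local clone of the repo with the file names shown in this commit.
from collections import defaultdict

import pandas as pd
from datasets import load_dataset

data_files = {
    "train": "alexfabbri--answersumm/json-train.parquet",
    "validation": "alexfabbri--answersumm/json-validation.parquet",
    "test": "alexfabbri--answersumm/json-test.parquet",
}

# Build a DatasetDict with one split per Parquet file.
ds = load_dataset("parquet", data_files=data_files)
print(ds)  # expected splits: train / validation / test

example = ds["train"][0]
print(example["question"]["title"])

# Aggregate relevant sentences by cluster id across all answers of one thread.
clusters = defaultdict(list)
for answer in example["answers"]:
    for sent in answer["sents"]:
        for ids in sent["cluster_id"]:   # list of lists (one per annotator)
            for cid in ids:
                if cid != -1:            # -1 means "not in any cluster"
                    clusters[cid].append(sent["text"])
print({cid: len(texts) for cid, texts in clusters.items()})

# The same files can also be read directly with pandas as flat tables.
train_df = pd.read_parquet(data_files["train"])
print(train_df.columns.tolist())
```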