---
license: apache-2.0
---

# CSAbstruct

CSAbstruct was created as part of ["Pretrained Language Models for Sequential Sentence Classification"][1].

It contains 2,189 manually annotated computer science abstracts, with each sentence labeled according to its rhetorical role in the abstract, similar to the [PUBMED-RCT][2] categories.
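
A minimal loading sketch is shown below. It assumes the data can be read with the Hugging Face `datasets` library and that each example stores an abstract as parallel lists of sentences and labels; the repository id `allenai/csabstruct` and the `sentences`/`labels` column names are assumptions, so check the data files for the exact schema.

```python
# Sketch only: the dataset id and column names are assumptions, not confirmed
# by this README -- inspect the repository files for the actual schema.
from datasets import load_dataset

dataset = load_dataset("allenai/csabstruct")  # hypothetical repository id
example = dataset["train"][0]

# Assumed layout: one abstract per example, with parallel lists of sentences
# and their rhetorical-role labels.
for sentence, label in zip(example["sentences"], example["labels"]):
    print(f"{label:>10}  {sentence}")
```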
## Dataset Construction Details
CSAbstruct is a new dataset of annotated computer science abstracts with sentence labels according to their rhetorical roles.
The key difference between this dataset and [PUBMED-RCT][2] is that PubMed abstracts are written according to a predefined structure, whereas computer science papers are free-form.
Therefore, there is more variety in writing styles in CSAbstruct.
CSAbstruct is collected from the Semantic Scholar corpus [(Ammar et al., 2018)][3].
Each sentence is annotated by 5 workers on the [Figure Eight platform][4] with one of 5 categories: `{BACKGROUND, OBJECTIVE, METHOD, RESULT, OTHER}`.

We use 8 abstracts (with 51 sentences) as test questions to train crowdworkers.
Annotators whose accuracy is less than 75% are disqualified from doing the actual annotation job.
The annotations are aggregated using the agreement on a single sentence weighted by the accuracy of the annotator on the initial test questions.
A confidence score is associated with each instance, based on the annotators' initial accuracy and the agreement of all annotators on that instance.
We then split the dataset 75%/15%/10% into train/dev/test partitions, such that the test set has the highest confidence scores.
The agreement rate on a random subset of 200 sentences is 75%, which is quite high given the difficulty of the task.
Compared with [PUBMED-RCT][2], our dataset exhibits a wider variety of writing styles, since its abstracts are not written with an explicit structural template.
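
The aggregation above is described only informally, so the snippet below is an illustrative sketch of accuracy-weighted vote aggregation with a per-instance confidence score, not the pipeline actually used to build the dataset; the function name, input format, and exact weighting are assumptions.

```python
# Illustrative sketch of accuracy-weighted label aggregation (hypothetical code,
# not the original annotation pipeline).
from collections import defaultdict

def aggregate(votes):
    """Aggregate one sentence's annotations.

    `votes` is a list of (label, annotator_test_accuracy) pairs, e.g.
    [("METHOD", 0.90), ("METHOD", 0.80), ("RESULT", 0.75)].
    Returns (label, confidence): the label with the largest accuracy-weighted
    support, and that label's share of the total weight.
    """
    weight = defaultdict(float)
    for label, accuracy in votes:
        weight[label] += accuracy  # each vote counts in proportion to annotator accuracy

    winner = max(weight, key=weight.get)
    confidence = weight[winner] / sum(weight.values())
    return winner, confidence

print(aggregate([("METHOD", 0.90), ("METHOD", 0.80), ("RESULT", 0.75)]))
# ('METHOD', 0.693...): instances can then be ranked by confidence,
# with the highest-confidence ones assigned to the test split.
```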
## Dataset Statistics
| Statistic                | Avg ± std   |
|--------------------------|-------------|
| Doc length in sentences  | 6.7 ± 1.99  |
| Sentence length in words | 21.8 ± 10.0 |

| Label        | % in Dataset |
|--------------|--------------|
| `BACKGROUND` | 33%          |
| `METHOD`     | 32%          |
| `RESULT`     | 21%          |
| `OBJECTIVE`  | 12%          |
| `OTHER`      | 3%           |
## Citation
If you use this dataset, please cite the following paper:

```bibtex
@inproceedings{Cohan2019EMNLP,
  title={Pretrained Language Models for Sequential Sentence Classification},
  author={Arman Cohan and Iz Beltagy and Daniel King and Bhavana Dalvi and Dan Weld},
  year={2019},
  booktitle={EMNLP},
}
```
[1]: https://aclanthology.org/D19-1383
[2]: https://arxiv.org/abs/1710.06071
[3]: https://aclanthology.org/N18-3011/
[4]: https://www.figure-eight.com/