---
license: apache-2.0
---
# CSAbstruct
CSAbstruct was created as part of ["Pretrained Language Models for Sequential Sentence Classification"][1].
It contains 2,189 manually annotated computer science abstracts with sentences annotated according to their rhetorical roles in the abstract, similar to the [PUBMED-RCT][2] categories.
## Dataset Construction Details
CSAbstruct is a new dataset of annotated computer science abstracts with sentence labels according to their rhetorical roles.
The key difference between this dataset and [PUBMED-RCT][2] is that PubMed abstracts are written according to a predefined structure, whereas computer science papers are free-form.
Therefore, there is more variety in writing styles in CSAbstruct.
CSAbstruct is collected from the Semantic Scholar corpus [(Ammar et al., 2018)][3].
Each sentence is annotated by 5 workers on the [Figure-eight platform][4], with one of 5 categories `{BACKGROUND, OBJECTIVE, METHOD, RESULT, OTHER}`.
We use 8 abstracts (with 51 sentences) as test questions to train crowdworkers.
Annotators whose accuracy is less than 75% are disqualified from doing the actual annotation job.
The annotations are aggregated by majority vote, with each annotator's vote weighted by their accuracy on the initial test questions.
Each instance is assigned a confidence score based on the annotators' initial accuracy and their agreement on that instance.
We then split the dataset 75%/15%/10% into train/dev/test partitions, such that the test set has the highest confidence scores.
Agreement rate on a random subset of 200 sentences is 75%, which is quite high given the difficulty of the task.
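The accuracy-weighted aggregation described above can be sketched roughly as follows. This is a minimal illustration, not the authors' actual code; the function name, data layout, and the exact confidence formula (winning label's share of the total weighted vote) are assumptions.

```python
from collections import defaultdict

def aggregate_labels(annotations, annotator_accuracy):
    """Aggregate one sentence's annotations by accuracy-weighted vote.

    annotations: list of (annotator_id, label) pairs for one sentence.
    annotator_accuracy: dict mapping annotator_id -> accuracy on the
        initial test questions (annotators below 0.75 are disqualified
        before this step).
    Returns (label, confidence), where confidence is the winning
    label's share of the total accuracy-weighted vote.
    """
    weights = defaultdict(float)
    for annotator_id, label in annotations:
        weights[label] += annotator_accuracy[annotator_id]
    label = max(weights, key=weights.get)
    confidence = weights[label] / sum(weights.values())
    return label, confidence

# Example: two accurate annotators outvote one dissenting annotator.
acc = {"a": 0.9, "b": 0.8, "c": 0.8}
votes = [("a", "METHOD"), ("b", "METHOD"), ("c", "RESULT")]
label, conf = aggregate_labels(votes, acc)  # label "METHOD", conf ~0.68
```

Splitting so that the test partition gets the highest-confidence instances then amounts to sorting abstracts by aggregated confidence before partitioning.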
Compared with [PUBMED-RCT][2], our dataset exhibits a wider variety of writing styles, since its abstracts are not written with an explicit structural template.
## Dataset Statistics
| Statistic | Avg ± std |
|--------------------------|-------------|
| Doc length in sentences | 6.7 ± 1.99 |
| Sentence length in words | 21.8 ± 10.0 |

| Label | % in Dataset |
|---------------|--------------|
| `BACKGROUND` | 33% |
| `METHOD` | 32% |
| `RESULT` | 21% |
| `OBJECTIVE` | 12% |
| `OTHER`       | 3%           |
## Citation
If you use this dataset, please cite the following paper:
```
@inproceedings{Cohan2019EMNLP,
title={Pretrained Language Models for Sequential Sentence Classification},
  author={Arman Cohan and Iz Beltagy and Daniel King and Bhavana Dalvi and Dan Weld},
year={2019},
booktitle={EMNLP},
}
```
[1]: https://aclanthology.org/D19-1383
[2]: https://arxiv.org/abs/1710.06071
[3]: https://aclanthology.org/N18-3011/
[4]: https://www.figure-eight.com/