Tasks: Token Classification
Modalities: Text
Formats: parquet
Sub-tasks: named-entity-recognition
Languages: English
Size: 1K - 10K
License:
Update README.md
added citation BibTeX
README.md CHANGED
@@ -69,7 +69,7 @@ Requirement to run the scoring scripts:
 To get scores on the validation data, zip your predictions file (a single `.jsonl` file formatted following the same instructions as above) and upload the `.zip` file to the [Codalabs](https://codalab.lisn.upsaclay.fr/competitions/5062) competition.
 
 ## File list
-```
+```
 ├── WIESP2022-NER-TRAINING.jsonl : 1753 samples for training.
 ├── WIESP2022-NER-DEV.jsonl : 20 samples for development.
 ├── WIESP2022-NER-DEV-sample-predictions.jsonl : an example file with properly formatted predictions on the development data.
@@ -82,4 +82,25 @@ To get scores on the validation data, zip your predictions file (a single `.json
 ├── scoring-scripts/ : scripts used to evaluate submissions.
     ├── compute_MCC.py : computes the Matthews correlation coefficient between two datasets.
     ├── compute_seqeval.py : computes the seqeval scores (precision, recall, f1, overall and for each class) between two datasets.
+```
+
+## Cite as
+[Overview of the First Shared Task on Detecting Entities in the Astrophysics Literature (DEAL)](https://aclanthology.org/2022.wiesp-1.1) (Grezes et al., WIESP 2022)
+
+```bibtex
+@inproceedings{grezes-etal-2022-overview,
+    title = "Overview of the First Shared Task on Detecting Entities in the Astrophysics Literature ({DEAL})",
+    author = "Grezes, Felix and
+      Blanco-Cuaresma, Sergi and
+      Allen, Thomas and
+      Ghosal, Tirthankar",
+    booktitle = "Proceedings of the first Workshop on Information Extraction from Scientific Publications",
+    month = "nov",
+    year = "2022",
+    address = "Online",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2022.wiesp-1.1",
+    pages = "1--7",
+    abstract = "In this article, we describe the overview of our shared task: Detecting Entities in the Astrophysics Literature (DEAL). The DEAL shared task was part of the Workshop on Information Extraction from Scientific Publications (WIESP) in AACL-IJCNLP 2022. Information extraction from scientific publications is critical in several downstream tasks such as identification of critical entities, article summarization, citation classification, etc. The motivation of this shared task was to develop a community-wide effort for entity extraction from astrophysics literature. Automated entity extraction would help to build knowledge bases, high-quality meta-data for indexing and search, and several other use-cases of interests. Thirty-three teams registered for DEAL, twelve of them participated in the system runs, and finally four teams submitted their system descriptions. We analyze their system and performance and finally discuss the findings of DEAL.",
+}
 ```
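
The Codalab submission step described above amounts to wrapping a single `.jsonl` file in a `.zip` archive. A minimal sketch using Python's standard `zipfile` module; the predictions filename below is illustrative, not a required name:

```python
import zipfile

# Illustrative filename: substitute the predictions file you actually produced.
predictions_file = "WIESP2022-NER-VALIDATION-predictions.jsonl"

# Codalab expects a .zip containing the single .jsonl predictions file.
with zipfile.ZipFile("predictions.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write(predictions_file)
```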
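The repository's `compute_MCC.py` is not reproduced here, but the metric it reports can be approximated with scikit-learn by flattening all token labels into two parallel lists. A sketch under the assumption that each `.jsonl` record stores its token labels under a `ner_tags` key:

```python
import json
from sklearn.metrics import matthews_corrcoef

def flatten_labels(path):
    # Assumption: one JSON object per line, token labels under "ner_tags".
    labels = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            labels.extend(json.loads(line)["ner_tags"])
    return labels

y_true = flatten_labels("WIESP2022-NER-DEV.jsonl")
y_pred = flatten_labels("WIESP2022-NER-DEV-sample-predictions.jsonl")
print(f"MCC: {matthews_corrcoef(y_true, y_pred):.4f}")
```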
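Similarly, the per-class and overall precision/recall/F1 that `compute_seqeval.py` reports come from the `seqeval` package, which consumes one IOB tag sequence per sample rather than a flat label list. A sketch under the same `ner_tags` assumption:

```python
import json
from seqeval.metrics import classification_report

def load_sequences(path):
    # Assumption: one JSON object per line, IOB tag sequence under "ner_tags".
    with open(path, encoding="utf-8") as f:
        return [json.loads(line)["ner_tags"] for line in f]

references = load_sequences("WIESP2022-NER-DEV.jsonl")
predictions = load_sequences("WIESP2022-NER-DEV-sample-predictions.jsonl")

# Prints precision, recall, and f1 per entity class plus overall averages.
print(classification_report(references, predictions))
```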
|