Datasets:
Tasks: Token Classification
Modalities: Text
Formats: parquet
Sub-tasks: named-entity-recognition
Languages: English
Size: 1K - 10K
License: cc-by-4.0
Browse files
- .gitattributes +0 -0
- README.md +75 -3
- WIESP2022-NER-DEV-sample-predictions.jsonl +0 -0
- WIESP2022-NER-DEV.jsonl +0 -0
- WIESP2022-NER-TRAINING.jsonl +0 -0
- WIESP2022-NER-VALIDATION-NO-LABELS.jsonl +0 -0
- ner_tags.json +1 -3
- scoring-scripts/compute_MCC.py +31 -3
- scoring-scripts/compute_seqeval.py +49 -3
.gitattributes
ADDED
File without changes
README.md
CHANGED
@@ -1,3 +1,75 @@
---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- en
licenses:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'WIESP2022-NER'
size_categories:
- 1K<n<10K
source_datasets: []
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset for the first <a href="https://ui.adsabs.harvard.edu/WIESP/" style="color:blue">Workshop on Information Extraction from Scientific Publications (WIESP/2022)</a>.
**(NOTE: loading from the Huggingface Dataset Hub directly does not work. You need to clone the repository locally.)**

## Dataset Description
Datasets with text fragments from astrophysics papers, provided by the [NASA Astrophysical Data System](https://ui.adsabs.harvard.edu/) with manually tagged astronomical facilities and other entities of interest (e.g., celestial objects).
Datasets are in JSON Lines format (each line is a JSON dictionary).
The datasets are formatted similarly to the CoNLL-2003 format. Each token is associated with an NER tag. The tags follow the "B-" and "I-" convention from the [IOB2 syntax](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)).

Each entry consists of a dictionary with the following keys:
- `"unique_id"`: a unique identifier for this data sample. Must be included in the predictions.
- `"tokens"`: the list of tokens (strings) that form the text of this sample. Must be included in the predictions.
- `"ner_tags"`: the list of NER tags (in IOB2 format).

The following keys are not strictly needed by the participants:
- `"ner_ids"`: the pre-computed list of ids corresponding to the ner_tags, as given by the dictionary in `ner_tags.json`.
- `"label_studio_id"`, `"section"`, `"bibcode"`: references for internal NASA/ADS use.

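For illustration, a single record could look like the following. This is a hypothetical, heavily truncated example: the token and tag values are invented and only show the expected structure.
```
# Hypothetical, truncated example of one record (values invented for illustration).
example_record = {
    "unique_id": "some-unique-id-0001",  # must be echoed back in the predictions
    "tokens": ["We", "observed", "NGC", "1068", "with", "ALMA", "."],
    "ner_tags": ["O", "O", "B-CelestialObject", "I-CelestialObject", "O", "B-Observatory", "O"],
    "ner_ids": [62, 62, 1, 32, 62, 20, 62],  # ids taken from ner_tags.json
    # "label_studio_id", "section", "bibcode": internal NASA/ADS references
}
```
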
## Instructions for Workshop participants:
How to load the data:
(assuming `./WIESP2022-NER-DEV.jsonl` is in the current directory, change as needed)
- python (as a list of dictionaries):
```
import json
with open("./WIESP2022-NER-DEV.jsonl", 'r') as f:
    wiesp_dev_json = [json.loads(l) for l in list(f)]
```
- into Huggingface (as a Huggingface Dataset):
```
from datasets import Dataset
wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
```
(NOTE: loading from the Huggingface Dataset Hub directly does not work. You need to clone the repository locally.)

How to compute your scores on the training data:
1. format your predictions as a list of dictionaries, each with the same `"unique_id"` and `"tokens"` keys from the dataset, as well as the list of predicted NER tags under the `"pred_ner_tags"` key (see `WIESP2022-NER-DEV-sample-predictions.jsonl` for an example).
2. pass the references and predictions datasets to the scoring functions in `scoring-scripts/compute_MCC.py` and `scoring-scripts/compute_seqeval.py` (`compute_MCC_jsonl()` and `compute_seqeval_jsonl()`), as sketched below.

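A minimal sketch of step 1 (not the official pipeline): the tagger here is a dummy placeholder that predicts `"O"` everywhere and the output file name is just an example; replace both with your own.
```
import json

# Placeholder tagger: predicts "O" for every token. Replace with your own model.
def my_tagger(tokens):
    return ["O"] * len(tokens)

# Load the reference training data (one JSON dict per line).
with open("./WIESP2022-NER-TRAINING.jsonl", "r") as f:
    references = [json.loads(line) for line in f]

# Build predictions: keep "unique_id" and "tokens", add "pred_ner_tags".
predictions = [
    {
        "unique_id": ref["unique_id"],
        "tokens": ref["tokens"],
        "pred_ner_tags": my_tagger(ref["tokens"]),  # one IOB2 tag per token
    }
    for ref in references
]

# Save in the same JSON Lines format as the sample predictions file.
with open("my-training-predictions.jsonl", "w") as f:
    for pred in predictions:
        f.write(json.dumps(pred) + "\n")
```
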
Requirements to run the scoring scripts:
- [NumPy](https://numpy.org/install/)
- [scikit-learn](https://scikit-learn.org/stable/install.html)
- [seqeval](https://github.com/chakki-works/seqeval#installation)

To get scores on the validation data, zip your predictions file (a single `.jsonl` file formatted following the same instructions as above) and upload the `.zip` file to the [Codalab](https://codalab.lisn.upsaclay.fr/competitions/5062) competition.

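One simple way to produce the `.zip` archive from Python, assuming your predictions are in a file named `my-validation-predictions.jsonl` (the file name is just an example):
```
import zipfile

# Wrap the single predictions .jsonl file into a .zip for the Codalab upload.
with zipfile.ZipFile("my-validation-predictions.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("my-validation-predictions.jsonl")
```
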
## File list
```
├── WIESP2022-NER-TRAINING.jsonl : 1753 samples for training.
├── WIESP2022-NER-DEV.jsonl : 20 samples for development.
├── WIESP2022-NER-DEV-sample-predictions.jsonl : an example file with properly formatted predictions on the development data.
├── WIESP2022-NER-VALIDATION-NO-LABELS.jsonl : 1366 samples for validation without the NER labels. Used for the WIESP2022 workshop.
├── README.md : this file.
└── scoring-scripts/ : scripts used to evaluate submissions.
    ├── compute_MCC.py : computes the Matthews correlation coefficient between two datasets.
    └── compute_seqeval.py : computes the seqeval scores (precision, recall, f1, overall and for each class) between two datasets.
```
WIESP2022-NER-DEV-sample-predictions.jsonl
CHANGED
The diff for this file is too large to render. See raw diff.

WIESP2022-NER-DEV.jsonl
CHANGED
The diff for this file is too large to render. See raw diff.

WIESP2022-NER-TRAINING.jsonl
CHANGED
The diff for this file is too large to render. See raw diff.

WIESP2022-NER-VALIDATION-NO-LABELS.jsonl
CHANGED
The diff for this file is too large to render. See raw diff.
ner_tags.json
CHANGED
@@ -1,3 +1 @@
- oid sha256:9fea87e4946d4916de22d1c78a13e4e444a8454ba4b4cca713137df25ad21c08
- size 1239
+ {"B-Archive": 0, "B-CelestialObject": 1, "B-CelestialObjectRegion": 2, "B-CelestialRegion": 3, "B-Citation": 4, "B-Collaboration": 5, "B-ComputingFacility": 6, "B-Database": 7, "B-Dataset": 8, "B-EntityOfFutureInterest": 9, "B-Event": 10, "B-Fellowship": 11, "B-Formula": 12, "B-Grant": 13, "B-Identifier": 14, "B-Instrument": 15, "B-Location": 16, "B-Mission": 17, "B-Model": 18, "B-ObservationalTechniques": 19, "B-Observatory": 20, "B-Organization": 21, "B-Person": 22, "B-Proposal": 23, "B-Software": 24, "B-Survey": 25, "B-Tag": 26, "B-Telescope": 27, "B-TextGarbage": 28, "B-URL": 29, "B-Wavelength": 30, "I-Archive": 31, "I-CelestialObject": 32, "I-CelestialObjectRegion": 33, "I-CelestialRegion": 34, "I-Citation": 35, "I-Collaboration": 36, "I-ComputingFacility": 37, "I-Database": 38, "I-Dataset": 39, "I-EntityOfFutureInterest": 40, "I-Event": 41, "I-Fellowship": 42, "I-Formula": 43, "I-Grant": 44, "I-Identifier": 45, "I-Instrument": 46, "I-Location": 47, "I-Mission": 48, "I-Model": 49, "I-ObservationalTechniques": 50, "I-Observatory": 51, "I-Organization": 52, "I-Person": 53, "I-Proposal": 54, "I-Software": 55, "I-Survey": 56, "I-Tag": 57, "I-Telescope": 58, "I-TextGarbage": 59, "I-URL": 60, "I-Wavelength": 61, "O": 62}
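The mapping above converts string tags to integer ids. A minimal sketch of using it in both directions, assuming the repository has been cloned so that `ner_tags.json` is in the current directory:
```
import json

# Load the tag -> id mapping shipped with the dataset.
with open("./ner_tags.json", "r") as f:
    tag2id = json.load(f)

# Build the inverse mapping (id -> tag) for decoding model outputs.
id2tag = {i: tag for tag, i in tag2id.items()}

print(tag2id["B-CelestialObject"])  # 1
print(id2tag[62])                   # "O"
```
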
scoring-scripts/compute_MCC.py
CHANGED
@@ -1,3 +1,31 @@
from sklearn.metrics import matthews_corrcoef
import numpy as np

def compute_MCC_jsonl(references_jsonl, predictions_jsonl, ref_col='ner_tags', pred_col='pred_ner_tags'):
    '''
    Computes the Matthews correlation coeff between two datasets in jsonl format (list of dicts each with same keys).
    Sorts the datasets by 'unique_id' and verifies that the tokens match.
    '''
    # reverse the list of dicts into a dict of lists
    ref_dict = {k: [e[k] for e in references_jsonl] for k in references_jsonl[0].keys()}
    pred_dict = {k: [e[k] for e in predictions_jsonl] for k in predictions_jsonl[0].keys()}

    # sort by unique_id
    ref_idx = np.argsort(ref_dict['unique_id'])
    pred_idx = np.argsort(pred_dict['unique_id'])
    ref_ner_tags = np.array(ref_dict[ref_col], dtype=object)[ref_idx]
    pred_ner_tags = np.array(pred_dict[pred_col], dtype=object)[pred_idx]
    ref_tokens = np.array(ref_dict['tokens'], dtype=object)[ref_idx]
    pred_tokens = np.array(pred_dict['tokens'], dtype=object)[pred_idx]

    # check that the tokens match
    for t1, t2 in zip(ref_tokens, pred_tokens):
        assert t1 == t2

    # the per-sample tag lists have to be flattened before scoring
    flat_ref_tags = np.concatenate(ref_ner_tags)
    flat_pred_tags = np.concatenate(pred_ner_tags)

    mcc_score = matthews_corrcoef(y_true=flat_ref_tags,
                                  y_pred=flat_pred_tags)

    return mcc_score
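A hedged usage sketch for this function, assuming `scoring-scripts/` is on your Python path and the repository data files have been cloned locally (the sample-predictions file is used here only as an example input):
```
import json
from compute_MCC import compute_MCC_jsonl  # assumes scoring-scripts/ is importable

def load_jsonl(path):
    # Read a JSON Lines file into a list of dictionaries.
    with open(path, "r") as f:
        return [json.loads(line) for line in f]

references = load_jsonl("./WIESP2022-NER-DEV.jsonl")
predictions = load_jsonl("./WIESP2022-NER-DEV-sample-predictions.jsonl")

print(compute_MCC_jsonl(references, predictions))  # a single float in [-1, 1]
```
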
scoring-scripts/compute_seqeval.py
CHANGED
@@ -1,3 +1,49 @@
from seqeval.metrics import classification_report, f1_score, precision_score, recall_score, accuracy_score
from seqeval.scheme import IOB2
import numpy as np

def compute_seqeval_jsonl(references_jsonl, predictions_jsonl, ref_col='ner_tags', pred_col='pred_ner_tags'):
    '''
    Computes the seqeval scores between two datasets loaded from jsonl (list of dicts with same keys).
    Sorts the datasets by 'unique_id' and verifies that the tokens match.
    '''
    # extract the tags and reverse the list of dicts into a dict of lists
    ref_dict = {k: [e[k] for e in references_jsonl] for k in references_jsonl[0].keys()}
    pred_dict = {k: [e[k] for e in predictions_jsonl] for k in predictions_jsonl[0].keys()}

    # sort by unique_id
    ref_idx = np.argsort(ref_dict['unique_id'])
    pred_idx = np.argsort(pred_dict['unique_id'])
    ref_ner_tags = np.array(ref_dict[ref_col], dtype=object)[ref_idx]
    pred_ner_tags = np.array(pred_dict[pred_col], dtype=object)[pred_idx]
    ref_tokens = np.array(ref_dict['tokens'], dtype=object)[ref_idx]
    pred_tokens = np.array(pred_dict['tokens'], dtype=object)[pred_idx]

    # check that the tokens match
    assert (ref_tokens == pred_tokens).all()

    # get the per-class report as a dictionary
    report = classification_report(y_true=ref_ner_tags, y_pred=pred_ner_tags,
                                   scheme=IOB2, output_dict=True,
                                   )

    # extract the values we care about
    report.pop("macro avg")
    report.pop("weighted avg")
    overall_score = report.pop("micro avg")

    seqeval_results = {
        type_name: {
            "precision": score["precision"],
            "recall": score["recall"],
            "f1": score["f1-score"],
            "support": score["support"],
        }
        for type_name, score in report.items()
    }
    seqeval_results["overall_precision"] = overall_score["precision"]
    seqeval_results["overall_recall"] = overall_score["recall"]
    seqeval_results["overall_f1"] = overall_score["f1-score"]
    seqeval_results["overall_accuracy"] = accuracy_score(y_true=ref_ner_tags, y_pred=pred_ner_tags)

    return seqeval_results
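A usage sketch mirroring the MCC example above, under the same assumptions (repository cloned locally, `scoring-scripts/` importable, file names used only as examples):
```
import json
from compute_seqeval import compute_seqeval_jsonl  # assumes scoring-scripts/ is importable

def load_jsonl(path):
    # Read a JSON Lines file into a list of dictionaries.
    with open(path, "r") as f:
        return [json.loads(line) for line in f]

references = load_jsonl("./WIESP2022-NER-DEV.jsonl")
predictions = load_jsonl("./WIESP2022-NER-DEV-sample-predictions.jsonl")

results = compute_seqeval_jsonl(references, predictions)
print(results["overall_f1"])   # micro-averaged F1 across all entity types
print(sorted(results.keys()))  # per-class entries plus the overall_* metrics
```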