---
annotations_creators:
  - expert-generated
language_creators:
  - found
language:
  - en
license:
  - cc-by-4.0
multilinguality:
  - monolingual
pretty_name: WIESP2022-NER
size_categories:
  - 1K<n<10K
source_datasets: []
task_categories:
  - token-classification
task_ids:
  - named-entity-recognition
---

Dataset for the first Workshop on Information Extraction from Scientific Publications (WIESP/2022).

## Dataset Description

Datasets with text fragments from astrophysics papers, provided by the NASA Astrophysics Data System, with manually tagged astronomical facilities and other entities of interest (e.g., celestial objects).
Datasets are in JSON Lines format (each line is a JSON dictionary).
The datasets are formatted similarly to the CoNLL2003 format: each token is associated with an NER tag. The tags follow the `B-` and `I-` convention of the IOB2 syntax.
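
For example, under IOB2 a multi-token entity gets a `B-` tag on its first token and `I-` tags on the rest (the `Telescope` tag name below is illustrative; see `tag_definitions.md` for the actual tag set):

```python
tokens   = ["Observations", "with", "the", "Hubble", "Space", "Telescope"]
ner_tags = ["O", "O", "O", "B-Telescope", "I-Telescope", "I-Telescope"]
```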

Each entry consists of a dictionary with the following keys:

- `"unique_id"`: a unique identifier for this data sample. Must be included in the predictions.
- `"tokens"`: the list of tokens (strings) that form the text of this sample. Must be included in the predictions.
- `"ner_tags"`: the list of NER tags (in IOB2 format).

The following keys are not strictly needed by the participants:

- `"ner_ids"`: the pre-computed list of ids corresponding to the `ner_tags`, as given by the dictionary in `ner_tags.json`.
- `"label_studio_id"`, `"section"`, `"bibcode"`: references for internal NASA/ADS use.
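
Put together, a single line of the JSONL files looks roughly like this (wrapped here for readability; all values, including the tag names and `ner_ids`, are invented for illustration):

```json
{"unique_id": "example_0001",
 "tokens": ["Observations", "with", "the", "Hubble", "Space", "Telescope"],
 "ner_tags": ["O", "O", "O", "B-Telescope", "I-Telescope", "I-Telescope"],
 "ner_ids": [0, 0, 0, 11, 12, 12],
 "label_studio_id": 12345,
 "section": "abstract",
 "bibcode": "2022ApJ...000....1X"}
```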

## Instructions for Workshop participants

How to load the data using the Huggingface library:

```python
from datasets import load_dataset

dataset = load_dataset("adsabs/WIESP2022-NER")
```
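
`load_dataset` returns a `DatasetDict` keyed by split name; a quick sanity check (the `"train"` key below is an assumption, use whichever split names the printout shows):

```python
print(dataset)  # lists the available splits, their sizes, and their features

# peek at one sample; the "train" split name is assumed here
sample = dataset["train"][0]
print(sample["unique_id"], sample["tokens"][:8], sample["ner_tags"][:8])
```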

How to load the data if you cloned the repository locally
(assuming `./WIESP2022-NER-DEV.jsonl` is in the current directory, change as needed):

- in python (as a list of dictionaries):

```python
import json

with open("./WIESP2022-NER-DEV.jsonl", "r") as f:
    wiesp_dev_json = [json.loads(line) for line in f]
```

- into Huggingface (as a Huggingface `Dataset`):

```python
from datasets import Dataset

wiesp_dev_from_json = Dataset.from_json(path_or_paths="./WIESP2022-NER-DEV.jsonl")
```

How to compute your scores on the training data:

1. format your predictions as a list of dictionaries, each with the same `"unique_id"` and `"tokens"` keys from the dataset, as well as the list of predicted NER tags under the `"pred_ner_tags"` key (see `WIESP2022-NER-DEV-sample-predictions.jsonl` for an example).
2. pass the references and predictions datasets to the `compute_MCC()` and `compute_seqeval()` functions (from the `.py` files with the same names), as sketched below.
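
A minimal sketch of that workflow, assuming you run it from the repository root and that both functions take the references dataset first and the predictions dataset second (check the scripts in `scoring-scripts/` for the exact signatures):

```python
import sys

sys.path.append("./scoring-scripts")  # make the scoring modules importable

from datasets import Dataset
from compute_MCC import compute_MCC
from compute_seqeval import compute_seqeval

# references: the training split, with the gold "ner_tags"
references = Dataset.from_json("./WIESP2022-NER-TRAINING.jsonl")

# predictions: same unique_id/tokens plus your "pred_ner_tags"
# ("my-predictions.jsonl" is a placeholder file name)
predictions = Dataset.from_json("./my-predictions.jsonl")

print(compute_MCC(references, predictions))      # Matthews correlation coefficient
print(compute_seqeval(references, predictions))  # seqeval precision / recall / F1
```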

Requirements to run the scoring scripts:

- NumPy
- scikit-learn
- seqeval

To get scores on the validation data, zip your predictions file (a single `.jsonl` file formatted following the same instructions as above) and upload the `.zip` file to the Codalab competition.
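
For example, a minimal way to produce that archive in Python (`my-predictions.jsonl` is a placeholder name):

```python
import zipfile

# compress the single predictions file into an uploadable .zip
with zipfile.ZipFile("my-predictions.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("my-predictions.jsonl")
```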

## File list

```
├── WIESP2022-NER-TRAINING.jsonl : 1753 samples for training.
├── WIESP2022-NER-DEV.jsonl : 20 samples for development.
├── WIESP2022-NER-DEV-sample-predictions.jsonl : an example file with properly formatted predictions on the development data.
├── WIESP2022-NER-VALIDATION-NO-LABELS.jsonl : 1366 samples for validation without the NER labels. Used for the WIESP2022 workshop.
├── WIESP2022-NER-VALIDATION.jsonl : 1366 samples for validation.
├── WIESP2022-NER-TESTING-NO-LABELS.jsonl : 2505 samples for testing without the NER labels. Used for the WIESP2022 workshop.
├── WIESP2022-NER-TESTING.jsonl : 2505 samples for testing.
├── README.MD : this file.
├── tag_definitions.md : short descriptions and examples of the tags used in the task.
└── scoring-scripts/ : scripts used to evaluate submissions.
    ├── compute_MCC.py : computes the Matthews correlation coefficient between two datasets.
    └── compute_seqeval.py : computes the seqeval scores (precision, recall, f1, overall and for each class) between two datasets.
```

## Cite as

Overview of the First Shared Task on Detecting Entities in the Astrophysics Literature (DEAL) (Grezes et al., WIESP 2022)

```bibtex
@inproceedings{grezes-etal-2022-overview,
    title = "Overview of the First Shared Task on Detecting Entities in the Astrophysics Literature ({DEAL})",
    author = "Grezes, Felix  and
      Blanco-Cuaresma, Sergi  and
      Allen, Thomas  and
      Ghosal, Tirthankar",
    booktitle = "Proceedings of the first Workshop on Information Extraction from Scientific Publications",
    month = "nov",
    year = "2022",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wiesp-1.1",
    pages = "1--7",
    abstract = "In this article, we describe the overview of our shared task: Detecting Entities in the Astrophysics Literature (DEAL). The DEAL shared task was part of the Workshop on Information Extraction from Scientific Publications (WIESP) in AACL-IJCNLP 2022. Information extraction from scientific publications is critical in several downstream tasks such as identification of critical entities, article summarization, citation classification, etc. The motivation of this shared task was to develop a community-wide effort for entity extraction from astrophysics literature. Automated entity extraction would help to build knowledge bases, high-quality meta-data for indexing and search, and several other use-cases of interests. Thirty-three teams registered for DEAL, twelve of them participated in the system runs, and finally four teams submitted their system descriptions. We analyze their system and performance and finally discuss the findings of DEAL.",
}
```