---
license: apache-2.0
task_categories:
- token-classification
language:
- en
tags:
- NER
- Aerospace
- ORG
- SYS
- DATETIME
- RESOURCE
- VALUE
pretty_name: all_text_annotation_NER.txt
size_categories:
- n<1K
---
# Dataset Card for aeroBERT-NER
## Dataset Description
- Paper: aeroBERT-NER: Named-Entity Recognition for Aerospace Requirements Engineering using BERT
- Point of Contact: archanatikayatray@gmail.com
### Dataset Summary

This dataset contains sentences from the aerospace requirements domain. The sentences are tagged for five NER categories (SYS, VAL, ORG, DATETIME, and RES) using the BIO tagging scheme.

There are a total of 1432 sentences. The creation of this dataset is aimed at:

(1) making available an open-source dataset of aerospace requirements, which are often proprietary;

(2) fine-tuning language models for token classification (NER) specific to the aerospace domain.

This dataset can be used for training or fine-tuning language models for the identification of the above-mentioned Named Entities in aerospace texts.
## Dataset Structure

Each line of the dataset has the format: Sentence-Number * WordPiece-Token * NER-tag

"*" is used as the delimiter to avoid confusion with commas (",") that occur in the text. The following example shows the dataset structure for sentence #1431:

```
1431*the*O
1431*airplane*B-SYS
1431*takeoff*O
1431*performance*O
1431*must*O
1431*be*O
1431*determined*O
1431*for*O
1431*climb*O
1431*gradients*O
1431*.*O
```
## Dataset Creation

### Source Data
Two types of aerospace texts were used to create the aerospace corpus for fine-tuning BERT:

(1) general aerospace texts, such as publications by the National Academies' Space Studies Board, and

(2) certification requirements from Title 14 CFR.

A total of 1432 sentences from the aerospace domain were included in the corpus.
## Importing the dataset into a Python environment

Use the following code chunk to import the dataset into a Python environment as a pandas DataFrame.

```python
from datasets import load_dataset
import pandas as pd
dataset = load_dataset("archanatikayatray/aeroBERT-NER")
#Converting the dataset into a pandas DataFrame
dataset = pd.DataFrame(dataset["train"]["text"])
dataset = dataset[0].str.split('*', expand = True)
#Getting the headers from the first row
header = dataset.iloc[0]
#Excluding the first row since it contains the headers
dataset = dataset[1:]
#Assigning the header to the DataFrame
dataset.columns = header
#Viewing the last 10 rows of the annotated dataset
dataset.tail(10)
```
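For fine-tuning a token-classification model, it is often convenient to regroup the token-level rows into one sequence per sentence. The following is a minimal sketch of one way to do this with pandas; it is not part of the original dataset card, and it refers to the columns by position (sentence number, token, tag) rather than by name, since the column names come from the header row of the raw file.

```python
# Group WordPiece tokens and NER tags by sentence number so that each sentence
# becomes one training example: a list of tokens and a parallel list of tags.
sent_col, tok_col, tag_col = dataset.columns[0], dataset.columns[1], dataset.columns[2]

grouped = dataset.groupby(sent_col, sort=False).agg(list)
tokens = grouped[tok_col].tolist()  # e.g. ['the', 'airplane', 'takeoff', ...]
tags = grouped[tag_col].tolist()    # e.g. ['O', 'B-SYS', 'O', ...]
```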
### Annotations

#### Annotation process

A Subject Matter Expert (SME) was consulted to decide on the annotation categories. The BIO tagging scheme was used to annotate the dataset:

- B - Beginning of an entity
- I - Inside an entity
- O - Outside an entity
| Category | NER Tags | Example |
|---|---|---|
| System | B-SYS, I-SYS | exhaust heat exchangers, powerplant, auxiliary power unit |
| Value | B-VAL, I-VAL | 1.2 percent, 400 feet, 10 to 19 passengers |
| Date time | B-DATETIME, I-DATETIME | 2013, 2019, May 11, 1991 |
| Organization | B-ORG, I-ORG | DOD, Ames Research Center, NOAA |
| Resource | B-RES, I-RES | Section 25-341, Sections 25-173 through 25-177, Part 23 subpart B |
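To illustrate how the BIO scheme applies to a multi-word entity from the table above, here is a small, purely illustrative example (the sentence is invented and is not taken from the dataset):

```python
# Illustrative only: a multi-word SYS entity receives a B-SYS tag on its first
# token and I-SYS tags on the remaining tokens; all other tokens are tagged O.
tokens = ["the", "exhaust", "heat", "exchangers", "must", "be", "inspected", "."]
tags   = ["O",   "B-SYS",   "I-SYS", "I-SYS",     "O",    "O",  "O",         "O"]
```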
The distribution of the various entities in the corpus is shown below:

| NER Tag | Description | Count |
|---|---|---|
| O | Tokens that are not identified as any NE | 37686 |
| B-SYS | Beginning of a system NE | 1915 |
| I-SYS | Inside a system NE | 1104 |
| B-VAL | Beginning of a value NE | 659 |
| I-VAL | Inside a value NE | 507 |
| B-DATETIME | Beginning of a date time NE | 147 |
| I-DATETIME | Inside a date time NE | 63 |
| B-ORG | Beginning of an organization NE | 302 |
| I-ORG | Inside an organization NE | 227 |
| B-RES | Beginning of a resource NE | 390 |
| I-RES | Inside a resource NE | 1033 |
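These counts can be sanity-checked directly from the imported DataFrame. A minimal sketch, again using the column position of the NER tag rather than an assumed column name:

```python
# Count how often each NER tag appears (the tag is the third column of the
# parsed DataFrame); the result should match the distribution table above.
tag_counts = dataset.iloc[:, 2].value_counts()
print(tag_counts)
```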
## Limitations
(1) The dataset is imbalanced, as is natural language in general (most tokens are not part of a Named Entity). Hence, using accuracy as a metric for model performance is NOT a good idea. Precision, recall, and F1 scores are suggested for evaluating model performance instead (see the evaluation sketch below).

(2) This dataset does not contain a test set. Hence, it is suggested that the user split the dataset into training/validation/test sets after importing the data into a Python environment, as sketched below. Please refer to the Appendix of the paper for information on the test set.
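A minimal sketch of such a split and an entity-level evaluation is shown below. The 80/10/10 ratio and the use of scikit-learn and seqeval are illustrative choices, not prescriptions from the dataset card; `tokens` and `tags` are the per-sentence lists built in the grouping sketch above, and `predicted_tags` stands in for the output of a fine-tuned model.

```python
from sklearn.model_selection import train_test_split

# Illustrative 80/10/10 split at the sentence level.
train_tok, rest_tok, train_tag, rest_tag = train_test_split(
    tokens, tags, test_size=0.2, random_state=42
)
val_tok, test_tok, val_tag, test_tag = train_test_split(
    rest_tok, rest_tag, test_size=0.5, random_state=42
)

# Entity-level precision/recall/F1 (rather than accuracy) with seqeval,
# once a model has produced `predicted_tags` for the test sentences:
# from seqeval.metrics import classification_report
# print(classification_report(test_tag, predicted_tags))
```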
## Citation Information

```bibtex
@Article{aeroBERT-NER,
AUTHOR = {Tikayat Ray, Archana and Pinon Fischer, Olivia J. and Mavris, Dimitri N. and White, Ryan T. and Cole, Bjorn F.},
TITLE = {aeroBERT-NER: Named-Entity Recognition for Aerospace Requirements Engineering using BERT},
JOURNAL = {AIAA SCITECH 2023 Forum},
YEAR = {2023},
URL = {https://arc.aiaa.org/doi/10.2514/6.2023-2583},
DOI = {10.2514/6.2023-2583}
}
@phdthesis{tikayatray_thesis,
author = {Tikayat Ray, Archana},
title = {Standardization of Engineering Requirements Using Large Language Models},
school = {Georgia Institute of Technology},
year = {2023},
doi = {10.13140/RG.2.2.17792.40961},
URL = {https://repository.gatech.edu/items/964c73e3-f0a8-487d-a3fa-a0988c840d04}
}
```