---
language:
- en
license: apache-2.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
- generated_from_span_marker_trainer
datasets:
- acronym_identification
metrics:
- precision
- recall
- f1
widget:
- text: >-
    Here, DA = direct assessment, RR = relative ranking, DS = discrete scale
    and CS = continuous scale.
  example_title: Example 1
- text: >-
    Modifying or replacing the Erasable Programmable Read Only Memory (EPROM)
    in a phone would allow the configuration of any ESN and MIN via software
    for cellular devices.
  example_title: Example 2
- text: >-
    We propose a technique called Aggressive Stochastic Weight Averaging
    (ASWA) and an extension called Norm-filtered Aggressive Stochastic Weight
    Averaging (NASWA) which improves the stability of models over random
    seeds.
  example_title: Example 3
- text: >-
    The choice of the encoder and decoder modules of DNPG can be quite
    flexible, for instance long-short term memory networks (LSTM) or
    convolutional neural network (CNN).
  example_title: Example 4
pipeline_tag: token-classification
co2_eq_emissions:
  emissions: 30.818996419923273
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 0.204
  hardware_used: 1 x NVIDIA GeForce RTX 3090
base_model: bert-base-cased
model-index:
- name: SpanMarker with bert-base-cased on Acronym Identification
  results:
  - task:
      type: token-classification
      name: Named Entity Recognition
    dataset:
      name: Acronym Identification
      type: acronym_identification
      split: validation
    metrics:
    - type: f1
      value: 0.9336161187698834
      name: F1
    - type: precision
      value: 0.942208904109589
      name: Precision
    - type: recall
      value: 0.9251786464901219
      name: Recall
---
# SpanMarker with bert-base-cased on Acronym Identification
This is a SpanMarker model trained on the Acronym Identification dataset that can be used for Named Entity Recognition. This SpanMarker model uses bert-base-cased as the underlying encoder. See train.py for the training script.
Is your data not (always) capitalized correctly? Then consider using the uncased variant of this model instead for better performance: tomaarsen/span-marker-bert-base-uncased-acronyms.
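Loading the uncased variant only changes the model identifier:

```python
from span_marker import SpanMarkerModel

# Uncased variant, better suited to text without reliable capitalization
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-base-uncased-acronyms")
```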
## Model Details
### Model Description
- Model Type: SpanMarker
- Encoder: bert-base-cased
- Maximum Sequence Length: 256 tokens
- Maximum Entity Length: 8 words
- Training Dataset: Acronym Identification
- Language: en
- License: apache-2.0
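The length limits and the label set above are arguments to `SpanMarkerModel.from_pretrained` when a model like this is first initialized from its encoder. A minimal sketch (the exact `labels` order is illustrative; see train.py for the real training script):

```python
from span_marker import SpanMarkerModel

# BIO tags for the "long" and "short" entity classes (order is illustrative)
labels = ["B-long", "B-short", "I-long", "I-short", "O"]

model = SpanMarkerModel.from_pretrained(
    "bert-base-cased",     # underlying encoder
    labels=labels,
    model_max_length=256,  # maximum sequence length in tokens
    entity_max_length=8,   # maximum entity length in words
)
```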
### Model Sources
- Repository: SpanMarker on GitHub
- Thesis: SpanMarker For Named Entity Recognition
### Model Labels

| Label | Examples                                                                                             |
|-------|------------------------------------------------------------------------------------------------------|
| long  | "Conversational Question Answering", "controlled natural language", "successive convex approximation" |
| short | "SODA", "CNL", "CoQA"                                                                                  |
## Evaluation
### Metrics
| Label | Precision | Recall | F1     |
|-------|-----------|--------|--------|
| all   | 0.9422    | 0.9252 | 0.9336 |
| long  | 0.9308    | 0.9013 | 0.9158 |
| short | 0.9479    | 0.9374 | 0.9426 |
## Uses
### Direct Use for Inference
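First install the `span_marker` library with `pip install span_marker`, then: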
```python
from span_marker import SpanMarkerModel

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-base-acronyms")
# Run inference
entities = model.predict("Compression algorithms like Principal Component Analysis (PCA) can reduce noise and complexity.")
```
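`model.predict` returns one dictionary per detected entity. A minimal usage sketch, assuming the library's standard output keys (`span`, `label`, `score`):

```python
# Print each predicted span with its label and confidence
for entity in entities:
    print(f"{entity['span']!r} -> {entity['label']} ({entity['score']:.2f})")
```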
### Downstream Use
You can finetune this model on your own dataset.
```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-base-acronyms")
# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003")  # For example CoNLL2003
# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
    model=model,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("tomaarsen/span-marker-bert-base-acronyms-finetuned")
```
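Note that the `ner_tags` of the fine-tuning dataset should use the same label scheme the model was trained with (here, BIO tags over the `long` and `short` classes). As a sketch, this is how the original training data can be brought into the expected format; the `rename_column` call is an assumption based on this dataset storing its tags in a column named `labels`:

```python
from datasets import load_dataset

# SpanMarker expects "tokens" and "ner_tags" columns
dataset = load_dataset("acronym_identification")
dataset = dataset.rename_column("labels", "ner_tags")
print(dataset["train"].features["ner_tags"].feature.names)
# expected: ["B-long", "B-short", "I-long", "I-short", "O"]
```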
## Training Details
### Training Set Metrics
| Training set          | Min | Median  | Max |
|-----------------------|-----|---------|-----|
| Sentence length       | 4   | 32.3372 | 170 |
| Entities per sentence | 0   | 2.6775  | 24  |
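A hedged sketch of recomputing these statistics from the training split (assumes the dataset schema noted above; exact aggregates may differ slightly from the card generator's):

```python
import statistics
from datasets import load_dataset

dataset = load_dataset("acronym_identification", split="train")
label_names = dataset.features["labels"].feature.names

sentence_lengths = [len(tokens) for tokens in dataset["tokens"]]
# In the BIO scheme, each entity contributes exactly one "B-" tag
entities_per_sentence = [
    sum(label_names[tag].startswith("B-") for tag in tags)
    for tags in dataset["labels"]
]

for name, values in (("Sentence length", sentence_lengths),
                     ("Entities per sentence", entities_per_sentence)):
    print(name, min(values), statistics.median(values), max(values))
```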
### Training Hyperparameters
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
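The `span_marker` `Trainer` builds on the 🤗 Transformers `Trainer`, so these hyperparameters map onto a standard `TrainingArguments`. A sketch (the `output_dir` is a placeholder; the Adam betas and epsilon listed above are the Transformers defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="models/span-marker-bert-base-acronyms",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
# Pass to the Trainer shown above: Trainer(model=model, args=args, ...)
```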
### Training Results
| Epoch  | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|--------|------|-----------------|----------------------|-------------------|---------------|---------------------|
| 0.3101 | 200  | 0.0083          | 0.9170               | 0.8894            | 0.9030        | 0.9766              |
| 0.6202 | 400  | 0.0063          | 0.9329               | 0.9149            | 0.9238        | 0.9807              |
| 0.9302 | 600  | 0.0060          | 0.9279               | 0.9338            | 0.9309        | 0.9819              |
| 1.2403 | 800  | 0.0058          | 0.9406               | 0.9092            | 0.9247        | 0.9812              |
| 1.5504 | 1000 | 0.0056          | 0.9453               | 0.9155            | 0.9302        | 0.9825              |
| 1.8605 | 1200 | 0.0054          | 0.9411               | 0.9271            | 0.9340        | 0.9831              |
### Environmental Impact
Carbon emissions were measured using CodeCarbon.
- Carbon Emitted: 0.031 kg of CO2
- Hours Used: 0.204 hours
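As an illustrative sketch of how such a measurement is taken with CodeCarbon (the actual tracking is integrated into the training run):

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... training happens here ...
emissions_kg = tracker.stop()  # total emissions in kg of CO2-eq
print(f"{emissions_kg:.3f} kg CO2-eq")
```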
### Training Hardware
- On Cloud: No
- GPU Model: 1 x NVIDIA GeForce RTX 3090
- CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
- RAM Size: 31.78 GB
### Framework Versions
- Python: 3.9.16
- SpanMarker: 1.3.1.dev
- Transformers: 4.30.0
- PyTorch: 2.0.1+cu118
- Datasets: 2.14.0
- Tokenizers: 0.13.2
## Citation
### BibTeX
```bibtex
@software{Aarsen_SpanMarker,
    author = {Aarsen, Tom},
    license = {Apache-2.0},
    title = {{SpanMarker for Named Entity Recognition}},
    url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```