SpanMarker with allenai/scibert_scivocab_uncased on my-data

This is a SpanMarker model that can be used for Named Entity Recognition. It uses allenai/scibert_scivocab_uncased as the underlying encoder.

Model Details

Model Description

  • Model Type: SpanMarker
  • Encoder: allenai/scibert_scivocab_uncased
  • Maximum Sequence Length: 256 tokens
  • Maximum Entity Length: 8 words
  • Language: en
  • License: cc-by-sa-4.0
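
The sequence and entity length limits above are fixed when a SpanMarker model is first built on top of its encoder. A minimal sketch, assuming an IOB2 label set derived from the label table below (the exact training labels are an assumption, not taken from this card):

from span_marker import SpanMarkerModel

# Illustrative IOB2 label set built from the label table below; the real training labels may differ
labels = ["O", "B-Data", "I-Data", "B-Material", "I-Material",
          "B-Method", "I-Method", "B-Process", "I-Process"]

# Initialize a fresh SpanMarker model on top of the SciBERT encoder with the limits listed above
model = SpanMarkerModel.from_pretrained(
    "allenai/scibert_scivocab_uncased",
    labels=labels,
    model_max_length=256,  # Maximum Sequence Length
    entity_max_length=8,   # Maximum Entity Length
)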

Model Sources

  • Repository: https://github.com/tomaarsen/SpanMarkerNER

Model Labels

| Label | Examples |
|-------|----------|
| Data | "an overall mitochondrial", "defect", "Depth time - series" |
| Material | "cross - shore measurement locations", "the subject 's fibroblasts", "COXI , COXII and COXIII subunits" |
| Method | "EFSA", "an approximation", "in vitro" |
| Process | "translation", "intake", "a significant reduction of synthesis" |

Evaluation

Metrics

| Label | Precision | Recall | F1 |
|-------|-----------|--------|----|
| all | 0.6981 | 0.6732 | 0.6854 |
| Data | 0.6269 | 0.6402 | 0.6335 |
| Material | 0.8085 | 0.7562 | 0.7815 |
| Method | 0.4211 | 0.4000 | 0.4103 |
| Process | 0.6891 | 0.6488 | 0.6683 |

Uses

Direct Use for Inference

from span_marker import SpanMarkerModel

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span-marker-allenai/scibert_scivocab_uncased-me")
# Run inference
entities = model.predict("In situ Peak Force Tapping AFM was employed for determining morphology and nano - mechanical properties of the surface layer .")
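
predict returns one dictionary per detected entity, containing the span text, its label, a confidence score, and character offsets. For example, the results can be inspected like this:

# Inspect the detected entities: each is a dict with keys such as "span", "label" and "score"
for entity in entities:
    print(entity["span"], "->", entity["label"], round(entity["score"], 2))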

Downstream Use

You can finetune this model on your own dataset.

from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span-marker-allenai/scibert_scivocab_uncased-me")

# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003")  # For example CoNLL2003

# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
    model=model,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("span-marker-allenai/scibert_scivocab_uncased-me-finetuned")
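
After training, validation metrics like those in the Evaluation and Training Results sections can be recomputed with the trainer; the exact metric key names depend on the SpanMarker version.

# Recompute entity-level metrics on the validation split after finetuning
metrics = trainer.evaluate()
print(metrics)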

Training Details

Training Set Metrics

| Training set | Min | Median | Max |
|--------------|-----|--------|-----|
| Sentence length | 3 | 25.6049 | 106 |
| Entities per sentence | 0 | 5.2439 | 22 |
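
These statistics come from the my-data training split. As a hedged sketch of how such numbers can be derived from any dataset with "tokens" and "ner_tags" columns (conll2003 is used here purely as an illustrative public dataset):

import statistics
from datasets import load_dataset

# Illustrative public dataset; the table above was computed on the my-data training split
dataset = load_dataset("conll2003")
train = dataset["train"]

# Map integer class IDs to IOB2 label strings (e.g. "B-PER")
label_names = train.features["ner_tags"].feature.names

# Sentence length = tokens per sentence; entities per sentence = number of "B-" tags
sentence_lengths = [len(tokens) for tokens in train["tokens"]]
entities_per_sentence = [
    sum(label_names[tag].startswith("B-") for tag in tags) for tags in train["ner_tags"]
]

for name, values in [("Sentence length", sentence_lengths), ("Entities per sentence", entities_per_sentence)]:
    print(name, min(values), statistics.median(values), max(values))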

Training Hyperparameters

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10
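
A hedged sketch of how these hyperparameters could be passed to the SpanMarker Trainer via the standard transformers TrainingArguments (the output directory is an illustrative placeholder, and the model/dataset are reused from the finetuning snippet above):

from datasets import load_dataset
from transformers import TrainingArguments
from span_marker import SpanMarkerModel, Trainer

# Reuse the model and dataset from the finetuning example above
model = SpanMarkerModel.from_pretrained("span-marker-allenai/scibert_scivocab_uncased-me")
dataset = load_dataset("conll2003")

args = TrainingArguments(
    output_dir="models/scibert-spanmarker",  # illustrative output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()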

Training Results

| Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|-------|------|-----------------|----------------------|-------------------|---------------|---------------------|
| 2.0134 | 300 | 0.0476 | 0.7297 | 0.5821 | 0.6476 | 0.7880 |
| 4.0268 | 600 | 0.0532 | 0.7537 | 0.6775 | 0.7136 | 0.8281 |
| 6.0403 | 900 | 0.0655 | 0.7162 | 0.7080 | 0.7121 | 0.8357 |
| 8.0537 | 1200 | 0.0761 | 0.7143 | 0.7061 | 0.7102 | 0.8251 |

Framework Versions

  • Python: 3.10.12
  • SpanMarker: 1.5.0
  • Transformers: 4.36.2
  • PyTorch: 2.0.1+cu118
  • Datasets: 2.16.1
  • Tokenizers: 0.15.0

Citation

BibTeX

@software{Aarsen_SpanMarker,
    author = {Aarsen, Tom},
    license = {Apache-2.0},
    title = {{SpanMarker for Named Entity Recognition}},
    url = {https://github.com/tomaarsen/SpanMarkerNER}
}