Citation Parsing (NER)
The Citation Parsing (NER) model applies Named Entity Recognition (NER) to extract key fields from raw citation strings, parsing them into structured data fields such as TITLE, AUTHORS, JOURNAL, VOLUME, ISSUE, PUBLICATION_YEAR, DOI, ISSN, ISBN, PAGE_FIRST, PAGE_LAST, and PUBLISHER.
Overview
- Model type: Language Model
- Architecture: DistilBERT
- Language: Multilingual
- License: Apache 2.0
- Task: Named Entity Recognition (NER) for Citation Parsing
- Dataset: Custom Citation Parsing Dataset
Model description
The Citation Parsing (NER) model is part of the Citation Parser package. It is fine-tuned to extract structured information from citation texts into the following key fields (an illustrative record follows the list):
- TITLE
- AUTHORS
- JOURNAL
- VOLUME
- ISSUE
- PUBLICATION_YEAR
- DOI
- ISSN
- ISBN
- PAGE_FIRST
- PAGE_LAST
- PUBLISHER
- LINK_ONLINE_AVAILABILITY
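For illustration, a single parsed citation might form a record like the following. The values are assembled by hand from the usage example further down, not produced by the model:

```python
# Hypothetical parsed record; keys follow the entity labels listed above,
# values are illustrative rather than actual model output.
parsed_citation = {
    "AUTHORS": "MURAKAMI, H.",
    "TITLE": (
        "Unique thermal behavior of acrylic PSAs bearing long alkyl "
        "side groups and crosslinked by aluminum chelate"
    ),
    "JOURNAL": "EUROPEAN POLYMER JOURNAL",
}
```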
This model was fine-tuned from distilbert-base-multilingual-cased, making it capable of processing multilingual citation data.
Intended Usage
This model is designed to parse raw citation text into structured fields. It is well suited to automating citation metadata extraction in academic databases, manuscript workflows, and citation analysis tools.
How to use
```python
from transformers import pipeline

# Load the model
citation_parser = pipeline("ner", model="SIRIS-Lab/citation-parser-ENTITY")

# Example citation text
citation_text = (
    "MURAKAMI, H.: 'Unique thermal behavior of acrylic PSAs bearing long "
    "alkyl side groups and crosslinked by aluminum chelate', "
    "《EUROPEAN POLYMER JOURNAL》"
)

# Parse the citation
result = citation_parser(citation_text)
print(result)
```
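The raw pipeline output is a list of token-level predictions. A minimal post-processing sketch, assuming the model's labels follow the standard B-/I- tagging scheme so that the pipeline's built-in `aggregation_strategy` can merge subword tokens into whole spans:

```python
from transformers import pipeline

# Re-create the pipeline with span aggregation enabled; "simple" merges
# consecutive subword tokens that share an entity label into one span.
citation_parser = pipeline(
    "ner",
    model="SIRIS-Lab/citation-parser-ENTITY",
    aggregation_strategy="simple",
)

# Group the aggregated spans into a field -> values mapping
# (citation_text is the example string defined above).
fields = {}
for span in citation_parser(citation_text):
    fields.setdefault(span["entity_group"], []).append(span["word"])
print(fields)
```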
Training
The model was trained using the SIRIS-Lab/citation-parser-ENTITY dataset, consisting of:
- Training data: 2419 samples
- Test data: 269 samples
The following hyperparameters were used for training; a configuration sketch follows the list:
- Base Model: distilbert/distilbert-base-multilingual-cased
- Batch Size: 16
- Number of Epochs: 10
- Learning Rate: 2e-5
- Weight Decay: 0.01
- Max Sequence Length: 512
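For orientation, these hyperparameters map onto `transformers` `TrainingArguments` roughly as below. This is a sketch, not the authors' training script: the dataset preprocessing, label alignment, and exact label count are assumptions.

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    TrainingArguments,
)

base = "distilbert/distilbert-base-multilingual-cased"
# Max sequence length of 512 applied via the tokenizer.
tokenizer = AutoTokenizer.from_pretrained(base, model_max_length=512)

# num_labels is an assumption: 13 entity types in B-/I- tagging plus O.
model = AutoModelForTokenClassification.from_pretrained(base, num_labels=27)

training_args = TrainingArguments(
    output_dir="citation-parser-entity",  # hypothetical output path
    per_device_train_batch_size=16,
    num_train_epochs=10,
    learning_rate=2e-5,
    weight_decay=0.01,
)
```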
Evaluation Metrics
The model's performance was evaluated on the test set, and the following results were obtained:
| Metric | Value |
|---|---|
| Overall Precision | 0.9448 |
| Overall Recall | 0.9548 |
| Overall F1 | 0.9498 |
| Overall Accuracy | 0.9759 |
Class-wise Evaluation Metrics:
| Entity | Precision | Recall | F1 | Samples |
|---|---|---|---|---|
| ALL (overall avg) | 0.9448 | 0.9548 | 0.9498 | 269 |
| AUTHORS | 0.9577 | 0.9468 | 0.9522 | 263 |
| DOI | 0.8333 | 0.9091 | 0.8696 | 22 |
| ISBN | 1.0000 | 1.0000 | 1.0000 | 3 |
| ISSN | 1.0000 | 1.0000 | 1.0000 | 34 |
| ISSUE | 0.9385 | 0.9683 | 0.9531 | 63 |
| JOURNAL | 0.8819 | 0.9228 | 0.9019 | 259 |
| LINK_ONLINE_AVAILABILITY | 0.3333 | 0.5000 | 0.4000 | 2 |
| PAGE_FIRST | 1.0000 | 1.0000 | 1.0000 | 130 |
| PAGE_LAST | 0.9915 | 0.9832 | 0.9873 | 119 |
| PUBLICATION_YEAR | 0.9797 | 0.9732 | 0.9764 | 149 |
| PUBLISHER | 0.4231 | 0.5238 | 0.4681 | 21 |
| TITLE | 0.9911 | 0.9867 | 0.9889 | 226 |
| VOLUME | 0.9597 | 0.9520 | 0.9558 | 125 |
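The card does not state which tool produced these figures; they are span-level metrics of the kind `seqeval` reports, so that library is an assumption here. A minimal sketch with toy IOB2 tag sequences:

```python
from seqeval.metrics import classification_report

# Toy gold and predicted tag sequences for one citation (IOB2 format).
y_true = [["B-AUTHORS", "I-AUTHORS", "O", "B-TITLE", "I-TITLE"]]
y_pred = [["B-AUTHORS", "I-AUTHORS", "O", "B-TITLE", "O"]]

# Prints per-entity precision/recall/F1 with support, plus averages.
print(classification_report(y_true, y_pred))
```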
Additional Information
Authors
SIRIS Lab, Research Division of SIRIS Academic.
License
This work is distributed under the Apache License, Version 2.0.
Contact
For further information, send an email to either nicolau.duransilva@sirisacademic.com or info@sirisacademic.com.