---
license: gpl-3.0
language:
  - en
metrics:
  - accuracy
base_model: dmis-lab/ANGEL_pretrained
---

# Model Card for ANGEL_ncbi

This model card provides detailed information about the ANGEL_ncbi model, designed for biomedical entity linking.

## Model Details

### Model Description

- **Developed by:** Chanhwi Kim, Hyunjae Kim, Sihyeon Park, Jiwoo Lee, Mujeen Sung, Jaewoo Kang
- **Model type:** Generative Biomedical Entity Linking Model
- **Language(s):** English
- **License:** GPL-3.0
- **Finetuned from model:** BART-large (base architecture)

### Model Sources

- **Repository:** https://github.com/dmis-lab/ANGEL
- **Paper:** [Learning from Negative Samples in Generative Biomedical Entity Linking](https://arxiv.org/abs/2408.16493)

## Direct Use

ANGEL_ncbi is a model designed for biomedical entity linking, with a focus on identifying and linking disease mentions in the NCBI-disease dataset. To use it, you need to set up a virtual environment and the inference code. Start by cloning our ANGEL GitHub repository, then run the following script to set up the environment:

```bash
bash script/environment/set_environment.sh
```

If you want to run the model on a single sample, no preprocessing is required. Simply execute the `run_sample.sh` script:

```bash
bash script/inference/run_sample.sh ncbi
```

To modify the sample with your own example, refer to the Direct Use section in our GitHub repository. If you're interested in training or evaluating the model, check out the Fine-tuning and Evaluation sections there.
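If you prefer to query the model directly through the Hugging Face `transformers` API rather than the repository scripts, the sketch below shows one way to do so. The `START`/`END` mention markers and the plain beam search are assumptions made for illustration; the official scripts handle the exact preprocessing and constrained decoding, so use them for faithful results.

```python
# Minimal sketch (not the official pipeline): querying ANGEL_ncbi with transformers.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "dmis-lab/ANGEL_ncbi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical input: a disease mention wrapped in marker tokens within its context.
text = ("Identification of APC2, a homologue of the "
        "START adenomatous polyposis coli tumour END suppressor.")
inputs = tokenizer(text, return_tensors="pt")

# Unconstrained beam search; results may differ from the constrained setup in the paper.
outputs = model.generate(**inputs, num_beams=5, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```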

## Training

### Training Data

The model was trained on the NCBI-disease dataset, which includes annotated disease entities.

### Training Procedure

1. **Positive-only pre-training:** initial training using only positive examples, following the standard generative entity-linking approach.
2. **Negative-aware training:** subsequent training that incorporates negative examples to improve the model's ability to discriminate between similar candidate names.
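As a conceptual illustration only (this is not the exact objective from the paper), negative-aware training can be thought of as scoring a gold entity name against a hard negative name for the same mention and penalizing the model when the negative scores too high. The entity names, margin value, and checkpoint in the sketch below are illustrative assumptions:

```python
# Illustrative sketch of the negative-aware idea, NOT the paper's exact objective.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/ANGEL_pretrained")
model = AutoModelForSeq2SeqLM.from_pretrained("dmis-lab/ANGEL_pretrained")

def sequence_log_likelihood(source: str, target: str) -> torch.Tensor:
    """Sum of token log-probabilities of `target` given `source`."""
    enc = tokenizer(source, return_tensors="pt")
    labels = tokenizer(text_target=target, return_tensors="pt").input_ids
    out = model(**enc, labels=labels)
    # `out.loss` is the mean token-level cross-entropy over the target tokens.
    return -out.loss * labels.size(1)

# Hypothetical mention with a gold name and a hard negative name.
mention = "The patient was diagnosed with START colorectal carcinoma END ."
gold_ll = sequence_log_likelihood(mention, "colorectal cancer")
neg_ll = sequence_log_likelihood(mention, "colorectal polyp")

# Margin loss: require the gold name to outscore the negative by a fixed margin.
margin = 1.0
loss = torch.clamp(margin - (gold_ll - neg_ll), min=0.0)
print(float(loss))
```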

## Evaluation

### Testing Data

The model was evaluated on the NCBI-disease dataset.

### Metrics

**Accuracy at Top-1 (Acc@1):** the percentage of mentions for which the model's top prediction matches the correct entity.
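For reference, Acc@1 reduces to a simple ratio; the helper and entity IDs below are hypothetical, not part of the repository:

```python
def accuracy_at_1(predictions, golds):
    """predictions: ranked candidate lists per mention; golds: gold entity IDs."""
    hits = sum(1 for ranked, gold in zip(predictions, golds)
               if ranked and ranked[0] == gold)
    return hits / len(golds)

# Two of the three top predictions match the gold IDs -> 2/3 ≈ 0.667
print(accuracy_at_1([["D003110"], ["D001943"], ["D012345"]],
                    ["D003110", "D001943", "D009999"]))
```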

### Scores

Acc@1 (%) on the NCBI-disease test set:

| Dataset | BioSYN (Sung et al., 2020) | SapBERT (Liu et al., 2021) | GenBioEL (Yuan et al., 2022b) | ANGEL (Ours) |
| --- | --- | --- | --- | --- |
| NCBI-disease | 91.1 | 92.3 | 91.0 | 92.8 |

The GenBioEL scores were reproduced by us.

## Citation

If you use the ANGEL_ncbi model, please cite:

```bibtex
@article{kim2024learning,
  title={Learning from Negative Samples in Generative Biomedical Entity Linking},
  author={Kim, Chanhwi and Kim, Hyunjae and Park, Sihyeon and Lee, Jiwoo and Sung, Mujeen and Kang, Jaewoo},
  journal={arXiv preprint arXiv:2408.16493},
  year={2024}
}
```

## Contact

For questions or issues, please contact chanhwi_kim@korea.ac.kr.