

BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains

Abstract:

Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges. In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated this benchmark into 7 other languages and evaluated the models on it. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released.

1. BioMistral models

BioMistral is a suite of open-source, Mistral-based models further pre-trained for the medical domain using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND licenses). All models were trained on the CNRS (French National Centre for Scientific Research) Jean Zay French HPC. A minimal, hypothetical sketch of this kind of continued pre-training follows the table below.

| Model Name | Base Model | Model Type | Sequence Length | Download |
|---|---|---|---|---|
| BioMistral-7B | Mistral-7B-Instruct-v0.1 | Further Pre-trained | 2048 | HuggingFace |
| BioMistral-7B-DARE | Mistral-7B-Instruct-v0.1 | Merge DARE | 2048 | HuggingFace |
| BioMistral-7B-TIES | Mistral-7B-Instruct-v0.1 | Merge TIES | 2048 | HuggingFace |
| BioMistral-7B-SLERP | Mistral-7B-Instruct-v0.1 | Merge SLERP | 2048 | HuggingFace |
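
For reference, continued (further) pre-training of the kind described above can be sketched with the Transformers Trainer. The snippet below is only an illustration with placeholder data and toy hyperparameters (the local file pmc_open_access.txt is hypothetical); it is not the actual BioMistral training setup, which is described in the paper.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Mistral defines no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Placeholder corpus: one document per line, e.g. extracted PubMed Central Open Access text.
corpus = load_dataset("text", data_files={"train": "pmc_open_access.txt"}, split="train")
corpus = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True,
    remove_columns=corpus.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="biomistral-further-pretraining",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        bf16=True,
    ),
    train_dataset=corpus,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM objective
)
trainer.train()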

2. Quantized Models

| Base Model | Method | q_group_size | w_bit | version | VRAM (GB) | Time | Download |
|---|---|---|---|---|---|---|---|
| BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | HuggingFace |
| BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | HuggingFace |
| BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | HuggingFace |
| BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | HuggingFace |
| BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | HuggingFace |
| BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | HuggingFace |
| BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | HuggingFace |
| BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | HuggingFace |
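
The AWQ rows above point to separate pre-quantized checkpoints, while the BnB.4 and BnB.8 rows correspond to loading the full-precision weights with bitsandbytes quantization at load time. A minimal sketch of the 4-bit case, assuming bitsandbytes and accelerate are installed and a CUDA GPU is available:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# On-the-fly 4-bit quantization with bitsandbytes (roughly the "BnB.4" row above).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 while weights stay 4-bit
)

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModelForCausalLM.from_pretrained(
    "BioMistral/BioMistral-7B",
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate
)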

3. Using BioMistral

You can use BioMistral with Hugging Face's Transformers library as follows.

Loading the model and tokenizer:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
# Load with a causal language-modeling head so the model can generate text.
model = AutoModelForCausalLM.from_pretrained("BioMistral/BioMistral-7B")
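
You can then generate an answer. This is a minimal sketch: the prompt wraps a placeholder question in the [INST] ... [/INST] format inherited from the Mistral-7B-Instruct base model.

prompt = "[INST] What are the main symptoms of iron-deficiency anemia? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))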

4. Supervised Fine-tuning Benchmark

| Model | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| BioMistral 7B | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 |
| Mistral 7B Instruct | 62.9 | 57.0 | 55.6 | 59.4 | 62.5 | 57.2 | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 |
| BioMistral 7B Ensemble | 62.8 | 62.7 | 57.5 | 63.5 | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | 48.8 | 58.7 |
| BioMistral 7B DARE | 62.3 | 67.0 | 55.8 | 61.4 | 66.9 | 58.0 | 51.1 | 45.2 | 77.7 | 48.7 | 59.4 |
| BioMistral 7B TIES | 60.1 | 65.0 | 58.5 | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 |
| BioMistral 7B SLERP | 62.5 | 64.7 | 55.8 | 62.7 | 64.8 | 56.3 | 50.8 | 44.3 | 77.8 | 48.6 | 58.8 |
| MedAlpaca 7B | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 |
| PMC-LLaMA 7B | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 |
| MediTron-7B | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 |
| BioMedGPT-LM-7B | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 |
| GPT-3.5 Turbo 1106* | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 |

Supervised fine-tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot evaluation. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, second-best underlined. *GPT-3.5 Turbo performance is reported from the 3-shot results without SFT.
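
The DARE, TIES, and SLERP variants are weight-space merges of BioMistral 7B with Mistral 7B Instruct rather than additionally trained models. As a rough illustration of the idea behind SLERP (spherical linear interpolation), the sketch below blends two weight tensors along the arc between them instead of along a straight line; it is a simplified toy example, not the merging code used to produce these checkpoints.

import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a, b = w_a.flatten().float(), w_b.flatten().float()
    a_n, b_n = a / (a.norm() + eps), b / (b.norm() + eps)
    omega = torch.arccos(torch.clamp(a_n @ b_n, -1.0, 1.0))  # angle between the two tensors
    if omega.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        merged = (1.0 - t) * a + t * b
    else:
        merged = (torch.sin((1.0 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)
    return merged.reshape(w_a.shape).to(w_a.dtype)

# Example: merge one parameter tensor halfway between two (random, toy) models.
merged_weight = slerp(torch.randn(4096, 4096), torch.randn(4096, 4096), t=0.5)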

Citation (BibTeX)

arXiv: https://arxiv.org/abs/2402.10373

@misc{labrak2024biomistral,
      title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains}, 
      author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour},
      year={2024},
      eprint={2402.10373},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}