---
language:
- en
- es
- fr
- it
license: apache-2.0
pretty_name: Multilingual Medical Corpus
tags:
- medical
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: en
    num_bytes: 7672665166
    num_examples: 21226237
  - name: es
    num_bytes: 6245812986
    num_examples: 35444286
  - name: fr
    num_bytes: 4763269707
    num_examples: 7192779
  - name: it
    num_bytes: 1021535232
    num_examples: 3504555
  download_size: 10530951092
  dataset_size: 19703283091
configs:
- config_name: default
  data_files:
  - split: en
    path: data/en-*
  - split: es
    path: data/es-*
  - split: fr
    path: data/fr-*
  - split: it
    path: data/it-*
---
# Multilingual Medical Corpus
Multilingual-Medical-Corpus is a 3-billion-word multilingual corpus for training LLMs adapted to the medical domain. It covers four languages: English, Spanish, French, and Italian.
- 📖 Paper: Medical mT5: An Open-Source Multilingual Text-to-Text LLM for The Medical Domain
- 🌐 Project Website: https://univ-cotedazur.eu/antidote
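The corpus is organized as one split per language (`en`, `es`, `fr`, `it`), as declared in the card metadata above. Below is a minimal loading sketch with the 🤗 `datasets` library; the repository id used here is an assumption and should be replaced with the actual id of this dataset on the Hub.

```python
from datasets import load_dataset

# Assumed repository id; replace with the actual dataset id if different.
REPO_ID = "HiTZ/Multilingual-Medical-Corpus"

# Each language is exposed as its own split (see the `configs` metadata above).
# Streaming avoids downloading the full ~10.5 GB archive up front.
en_stream = load_dataset(REPO_ID, split="en", streaming=True)

# Inspect a few documents; every example has a single `text` field.
for i, example in enumerate(en_stream):
    print(example["text"][:200])
    if i == 2:
        break
```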
## Corpus Description
- Developed by: Iker García-Ferrero, Rodrigo Agerri, Aitziber Atutxa Salazar, Elena Cabrio, Iker de la Iglesia, Alberto Lavelli, Bernardo Magnini, Benjamin Molinet, Johana Ramirez-Romero, German Rigau, Jose Maria Villa-Gonzalez, Serena Villata and Andrea Zaninello
- Contact: Iker García-Ferrero and Rodrigo Agerri
- Website: https://univ-cotedazur.eu/antidote
- Funding: CHIST-ERA XAI 2019 call. Antidote (PCI2020-120717-2), funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR
- Language(s) (NLP): English, Spanish, French, Italian
- License: apache-2.0
| Language | Source | Words |
|---|---|---|
| English | ClinicalTrials | 127.4M |
| | EMEA | 12M |
| | PubMed | 968.4M |
| Spanish | EMEA | 13.6M |
| | PubMed | 8.4M |
| | Medical Crawler | 918M |
| | SPACC | 350K |
| | UFAL | 10.5M |
| | WikiMed | 5.2M |
| French | PubMed | 1.4M |
| | Science Direct | 15.2M |
| | Wikipedia - Médecine | 5M |
| | EDP | 48K |
| | Google Patents | 654M |
| Italian | Medical Commoncrawl - IT | 67M |
| | Drug instructions | 30.5M |
| | Wikipedia - Medicina | 13.3M |
| | E3C Corpus - IT | 11.6M |
| | Medicine descriptions | 6.3M |
| | Medical theses | 5.8M |
| | Medical websites | 4M |
| | PubMed | 2.3M |
| | Supplement description | 1.3M |
| | Medical notes | 975K |
| | Pathologies | 157K |
| | Medical test simulations | 26K |
| | Clinical cases | 20K |
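Summing the per-source figures above gives roughly 1,108M words for English, 956M for Spanish, 676M for French, and 143M for Italian, i.e. about 2.9 billion words in total, consistent with the 3-billion-word description. A small sanity-check sketch with the counts hard-coded from the table:

```python
# Per-language word counts from the table above, in millions of words.
words_millions = {
    "English": [127.4, 12, 968.4],
    "Spanish": [13.6, 8.4, 918, 0.35, 10.5, 5.2],
    "French":  [1.4, 15.2, 5, 0.048, 654],
    "Italian": [67, 30.5, 13.3, 11.6, 6.3, 5.8, 4, 2.3,
                1.3, 0.975, 0.157, 0.026, 0.020],
}

for lang, counts in words_millions.items():
    print(f"{lang}: {sum(counts):,.1f}M words")

total = sum(sum(counts) for counts in words_millions.values())
print(f"Total: {total / 1000:.2f}B words")  # ~2.88B, i.e. roughly 3 billion
```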
## Open Source Models trained with Multilingual-Medical-Corpus
| | HiTZ/Medical-mT5-large | HiTZ/Medical-mT5-xl | HiTZ/Medical-mT5-large-multitask | HiTZ/Medical-mT5-xl-multitask |
|---|---|---|---|---|
| Param. no. | 738M | 3B | 738M | 3B |
| Task | Language Modeling | Language Modeling | Multitask Sequence Labeling | Multitask Sequence Labeling |
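All four checkpoints are mT5-style encoder-decoder models and can be loaded with the generic 🤗 `transformers` seq2seq classes. A minimal sketch follows; the prompt is purely illustrative, and the expected input/output format for the multitask variants is defined in the paper and model cards, not here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Any of the four checkpoints listed above can be substituted here.
model_name = "HiTZ/Medical-mT5-large"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Illustrative generation call only; consult the model cards for the
# prompting/labeling format expected by the multitask variants.
inputs = tokenizer("The patient was administered 5 mg of", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```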
## Citation
The paper will be released soon; in the meantime, you can use the following BibTeX entry:
```bibtex
@inproceedings{medical-mt5,
  title     = "{{Medical mT5: An Open-Source Multilingual Text-to-Text LLM for The Medical Domain}}",
  author    = "Iker García-Ferrero and Rodrigo Agerri and Aitziber Atutxa Salazar and Elena Cabrio and Iker de la Iglesia and Alberto Lavelli and Bernardo Magnini and Benjamin Molinet and Johana Ramirez-Romero and German Rigau and Jose Maria Villa-Gonzalez and Serena Villata and Andrea Zaninello",
  booktitle = "Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING)",
  year      = 2024
}
```