---
license: cc-by-4.0
datasets:
- HPAI-BSC/Aloe-Beta-General-Collection
- HPAI-BSC/chain-of-diagnosis
- HPAI-BSC/MedS-Ins
- HPAI-BSC/ultramedical
- HPAI-BSC/pubmedqa-cot-llama31
- HPAI-BSC/medqa-cot-llama31
- HPAI-BSC/medmcqa-cot-llama31
- HPAI-BSC/headqa-cot-llama31
- HPAI-BSC/MMLU-medical-cot-llama31
- HPAI-BSC/Polymed-QA
language:
- en
library_name: transformers
tags:
- biology
- medical
- healthcare
pipeline_tag: question-answering
base_model:
- meta-llama/Llama-3.1-70B
---
<p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/udSFjP3wdCu3liH_VXhBk.png">
    <img alt="aloe_70b" src="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/udSFjP3wdCu3liH_VXhBk.png" width=50%>
  </picture>
</p>
<h1 align="center">
  Aloe: A Family of Fine-tuned Open Healthcare LLMs
</h1>

---

Llama3.1-Aloe-Beta-70B is an **open healthcare LLM** (released with a permissive CC-BY license) achieving **state-of-the-art performance** on several medical tasks. Aloe Beta is made available in two model sizes: [8B](https://huggingface.co/HPAI-BSC/Llama31-Aloe-Beta-8B) and [70B](https://huggingface.co/HPAI-BSC/Llama31-Aloe-Beta-70B). Both models are trained using the same recipe. All necessary resources and details are made available below.

Aloe is trained on 10 medical tasks, resulting in a robust and versatile healthcare model. Evaluations show Aloe models to be among the best in their class. When combined with a RAG system ([also released](https://github.com/HPAI-BSC/prompt_engine)), the 8B version gets close to the performance of closed models like MedPaLM-2 and GPT-4 with Medprompt. With the same RAG system, Aloe-Beta-70B outperforms those private alternatives, producing state-of-the-art results.

# Aloe-70B-Beta

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f7a16192950415b637e201/VUYw4IdANKGrH2VOedwH0.png)

Aloe-70B-Beta is the latest iteration in the Aloe family, building on the success of its predecessor, [Aloe-8B-Alpha](https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha), at a larger model size.
Beta more than triples the training data used by Alpha, for a total of 1.8B tokens, including a wider variety of medical tasks and instructions (e.g., text summarization, explanation, diagnosis, text classification, treatment recommendation).

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f7a16192950415b637e201/bCuV5kZUT9H9UECAOWDRc.png)

Beta also strengthens the alignment and safety stages with respect to Alpha. This includes a [medical preference dataset](https://huggingface.co/datasets/TsinghuaC3I/UltraMedical-Preference), as well as a red-teaming dataset (available soon).

Complete training details, model merging configurations, and all training data (including synthetically generated data) can be found below. This includes [the RAG system](https://github.com/HPAI-BSC/prompt_engine) that was developed to test Aloe Beta in a deployment setup. Aloe comes with a healthcare-specific risk assessment to facilitate the safe use and deployment of such systems.

## Model Details

### Model Description

- **Developed by:** [HPAI](https://hpai.bsc.es/)
- **Model type:** Causal decoder-only transformer language model
- **Language(s) (NLP):** English (capable of but not formally evaluated on other languages)
- **License:** This model is based on Meta Llama 3.1 70B and is governed by the [Meta Llama 3.1 License](https://www.llama.com/llama3_1/license/). All our modifications are available under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license, making the Aloe Beta models **compatible with commercial use**.
- **Base model:** [meta-llama/Llama-3.1-70B](https://huggingface.co/meta-llama/Llama-3.1-70B)
- **Paper:** (more coming soon)
- **RAG Repository:** https://github.com/HPAI-BSC/prompt_engine

## Model Performance

Aloe Beta has been tested on the most popular healthcare QA datasets, with and without the Medprompt inference technique. Results show competitive performance, achieving SOTA within models of the same size.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/s8rWbwpYTkar5_X_LnOhb.png)

More evaluations coming soon!

<!---
The Beta model has been developed to excel in several different medical tasks. For this reason, we evaluated the model in many different medical tasks:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/ZABYUxpQRMDcrJmKhkEfz.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/2NW3im0aH2u6RKp969sjx.png)

We also compared the performance of the model in the general domain, using the OpenLLM Leaderboard benchmark. Aloe-Beta gets competitive results with the current SOTA general models in the most used general benchmarks and outperforms the medical models:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/imK19fzyMUvIJaAbSVnGE.png)
-->
## Uses

### Direct Use

We encourage the use of Aloe for research purposes, as a stepping stone to build better foundational models for healthcare. In production, Aloe should always be used under the supervision of a human expert.

### Out-of-Scope Use

These models are not to be used for clinical practice, medical diagnosis, or any other form of direct or indirect healthcare advice. Models are prone to error and can produce toxic content. The use of Aloe models for activities harmful to individuals, such as spam, fraud, or impersonation, is strictly prohibited. Minors should not interact with Aloe without supervision.

## Bias, Risks, and Limitations

Aloe can produce toxic content under the appropriate prompts, and it includes multiple undesirable biases. While significant efforts were made to mitigate this (see Alignment details below), model safety cannot be fully guaranteed. We avoid the use of all personal data in our training.

We identify at least three risk cases specific to healthcare LLMs:
- Healthcare professional impersonation, a fraudulent behaviour which currently generates billions of dollars in [profit](https://www.justice.gov/opa/pr/justice-department-charges-dozens-12-billion-health-care-fraud). A model such as Aloe could be used to increase the efficacy of such deceiving activities, making them more widespread. The main preventive actions are public literacy on the unreliability of digitised information and the importance of medical registration, and legislation enforcing AI-generated content disclaimers.
- Medical decision-making without professional supervision. While this is already an issue in modern societies (e.g., self-medication), a model such as Aloe, capable of producing high-quality conversational data, can facilitate self-delusion, particularly in the presence of sycophancy. By producing tailored responses, it can also be used to generate actionable answers. Public literacy on the dangers of self-diagnosis is one of the main defenses, together with the introduction of disclaimers and warnings on the models' outputs.
- Access to information on dangerous substances or procedures. While such sensitive content can already be found in different sources (e.g., libraries, the internet, the dark web), LLMs can centralize access, making it nearly impossible to control the flow of such information. Model alignment can help in that regard, but so far its effects remain insufficient, as jailbreaking methods still overcome it.

<!---
Table below shows the performance of Aloe at several AI safety tasks:

TO BE UPDATED

<img src="https://cdn-uploads.huggingface.co/production/uploads/62972c4979f193515da1d38e/T6Jblpf1kmTkM04K716rM.png" width="95%">

We analyzed the safety and robustness of the model using red teaming techniques. We designed a benchmark using different types of attacks and analyzed the performance of Aloe and some extra models, and we confirm that our model is aligned properly and successfully resists most attacks:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/KS3yrHan1l1W0cYiXGG-G.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/SYC0qljpLGLmMgx0a623W.png)
-->

## How to Get Started with the Model

Use the code below to get started with the model. You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples for both.

#### Transformers pipeline

```python
import transformers
import torch

model_id = "HPAI-BSC/Llama31-Aloe-Beta-70B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert medical assistant named Aloe, developed by the High Performance Artificial Intelligence Group at Barcelona Supercomputing Center (BSC). You are to be a helpful, respectful, and honest assistant."},
    {"role": "user", "content": "Hello."},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

#### Transformers AutoModelForCausalLM

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "HPAI-BSC/Llama31-Aloe-Beta-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert medical assistant named Aloe, developed by the High Performance Artificial Intelligence Group at Barcelona Supercomputing Center (BSC). You are to be a helpful, respectful, and honest assistant."},
    {"role": "user", "content": "Hello"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Training Details

### Supervised fine-tuning
SFT on top of Llama 3.1 using [axolotl](https://github.com/axolotl-ai-cloud/axolotl).

We used DeepSpeed's ZeRO-3 distributed training on the following hardware:

* 8B: 32x NVIDIA Hopper H100 64GB of the *MareNostrum 5*.
* 70B: 64x NVIDIA Hopper H100 64GB of the *MareNostrum 5*.

<!---
^^^ TO BE COMPLETED AND DETAILED ^^^
-->

#### Training Data

The training set consists of around 1.8B tokens, spanning three types of data:

- Medical domain datasets:
  - [HPAI-BSC/Aloe-Beta-General-Collection](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-General-Collection)
  - [HPAI-BSC/chain-of-diagnosis](https://huggingface.co/datasets/HPAI-BSC/chain-of-diagnosis)
  - [HPAI-BSC/MedS-Ins](https://huggingface.co/datasets/HPAI-BSC/MedS-Ins)
  - [HPAI-BSC/ultramedical](https://huggingface.co/datasets/HPAI-BSC/ultramedical)
- Synthetic data generated using Llama3.1:
  - [HPAI-BSC/pubmedqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/pubmedqa-cot-llama31)
  - [HPAI-BSC/medqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/medqa-cot-llama31)
  - [HPAI-BSC/medmcqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/medmcqa-cot-llama31)
  - [HPAI-BSC/headqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/headqa-cot-llama31)
  - [HPAI-BSC/MMLU-medical-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/MMLU-medical-cot-llama31)
  - [HPAI-BSC/Polymed-QA](https://huggingface.co/datasets/HPAI-BSC/Polymed-QA)
  - Genstruct data (coming soon)
- General data:
  - [HPAI-BSC/Aloe-Beta-General-Collection](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-General-Collection)

#### Training parameters
- Epochs: 4
- Sequence length: 16384
- Optimizer: adamw_torch
- Learning rate: 2e-5
- Learning rate scheduler: cosine
- Warmup steps: 100
- Weight decay: 0
- Gradient checkpointing
- ZeRO-3
- Total batch size: 128
- Batch size per device: 1
- Gradient accumulation steps: 2

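As a sanity check on the training parameters, the effective batch size in data-parallel training factors as per-device batch size × gradient accumulation steps × number of GPUs; with 1 × 2 across the 64 H100s used for the 70B run, this matches the total of 128. A minimal sketch (the helper name is ours):

```python
def effective_batch_size(per_device: int, grad_accum: int, num_gpus: int) -> int:
    """Effective (total) batch size under data-parallel training."""
    return per_device * grad_accum * num_gpus

# 70B run: batch size 1 per device, 2 accumulation steps, 64 GPUs
print(effective_batch_size(1, 2, 64))  # → 128
```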
### Model Merging
The trained model was merged with the Llama-3.1-Instruct model using the DARE_TIES technique. [Mergekit](https://github.com/arcee-ai/mergekit) was used to conduct the merging.

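Mergekit merges are driven by a YAML configuration. The sketch below shows the general shape of a DARE-TIES merge; the checkpoint path, `density`, and `weight` values are illustrative, not the actual configuration used for Aloe Beta:

```yaml
# Illustrative mergekit config -- values are NOT the ones used for Aloe Beta.
merge_method: dare_ties
base_model: meta-llama/Llama-3.1-70B
models:
  - model: meta-llama/Llama-3.1-70B-Instruct
    parameters:
      density: 0.5   # fraction of delta weights kept by DARE's random dropping
      weight: 0.5
  - model: ./aloe-70b-sft   # hypothetical path to the SFT checkpoint
    parameters:
      density: 0.5
      weight: 0.5
dtype: bfloat16
```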

### Model Alignment
The model is aligned using the Direct Preference Optimization (DPO) technique through a two-step process:

1. General DPO Alignment: This step uses a dataset combining medical, general preference, and safety data. We used our dataset [HPAI-BSC/Aloe-Beta-DPO](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-DPO). We split the dataset into five parts, and the model was trained iteratively for one epoch on each chunk. We used a learning rate of 2e-7.
2. Red-Teaming Alignment: This step further fine-tunes the model to resist a variety of potential attacks, enhancing its robustness and security. The dataset will be shared soon. In this stage, we set the learning rate to 1e-7.

<!---
^^^ LINKS TO DPO DATA (DPO added, missing the RT^^^
-->

We used the [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF) library. We aligned the model using 25x NVIDIA Hopper H100 64GB of the *MareNostrum 5*. Common hyperparameters:

- Sequence length: 4096
- Optimizer: Fused Adam
- Total batch size: 100
- Batch size per device: 1
- Gradient accumulation steps: 4
- Beta: 0.1

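The five-chunk schedule from the general DPO stage can be sketched as below; `train_dpo_epoch` is a stand-in for the actual OpenRLHF training call, which is not shown here:

```python
def chunk(dataset, n_chunks):
    """Split a dataset (e.g. a list of preference pairs) into n contiguous chunks."""
    k, r = divmod(len(dataset), n_chunks)
    chunks, start = [], 0
    for i in range(n_chunks):
        end = start + k + (1 if i < r else 0)  # spread the remainder over early chunks
        chunks.append(dataset[start:end])
        start = end
    return chunks

def align(model, dataset, train_dpo_epoch, n_chunks=5, lr=2e-7):
    # One epoch over each chunk, iteratively, as in the general DPO stage.
    for part in chunk(dataset, n_chunks):
        model = train_dpo_epoch(model, part, lr=lr)
    return model
```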

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

- [ACI-BENCH](https://github.com/wyim/aci-bench)
- [MTS-Dialog](https://github.com/abachaa/MTS-Dialog)
- [MedText](https://huggingface.co/datasets/BI55/MedText)
- [Medical Text classification](https://www.kaggle.com/datasets/chaitanyakck/medical-text/data)
- [OLAPH](https://github.com/dmis-lab/OLAPH)
- CareQA Open
- [MedDialog](https://huggingface.co/datasets/bigbio/meddialog)
- [MEDIQA QA](https://huggingface.co/datasets/bigbio/mediqa_qa)
- [Meddialog Qsumm](https://huggingface.co/datasets/lighteval/med_dialog)
- [Biored](https://huggingface.co/datasets/YufeiHFUT/BioRED_all_info)
- [MIMIC-III](https://huggingface.co/datasets/dmacres/mimiciii-hospitalcourse-meta)
- [Medical Prescription](https://huggingface.co/datasets/devlocalhost/prescription-full)
- [MedQA (USMLE)](https://huggingface.co/datasets/bigbio/med_qa)
- [MedMCQA](https://huggingface.co/datasets/medmcqa)
- [PubMedQA](https://huggingface.co/datasets/bigbio/pubmed_qa)
- [MMLU-Medical](https://huggingface.co/datasets/lukaemon/mmlu)
- [MedQA-4-Option](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
- [CareQA](https://huggingface.co/datasets/HPAI-BSC/CareQA)
- [Open LLM Leaderboard 2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

<!---
^^^ CAREQA Open link MISSING ^^^
-->

#### Metrics

- Accuracy: suited to the evaluation of multiple-choice question-answering tasks.
- Rouge1: measures the overlap of unigrams between the system output and the gold standard.

<!---
^^^ MORE METRICS MISSING ^^^
-->

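For reference, ROUGE-1 reduces to clipped unigram-overlap counts. A minimal self-contained sketch of the F1 variant follows; real evaluations use a proper implementation (e.g. the `rouge-score` package), which also handles tokenization and stemming:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the patient has fever", "the patient has a mild fever"))
```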
#### Summary

To compare Aloe with the most competitive open models (both general purpose and healthcare-specific) we use popular healthcare datasets (PubMedQA, MedMCQA, MedQA, and MMLU restricted to six medical tasks), together with the new and highly reliable CareQA. We produce the standard MultiMedQA score for reference, by computing the weighted average accuracy on all scores except CareQA. Additionally, we calculate the arithmetic mean across all datasets. The Medical MMLU score is calculated by averaging the six medical subtasks: Anatomy, Clinical Knowledge, College Biology, College Medicine, Medical Genetics, and Professional Medicine.

Benchmark results indicate the training conducted on Aloe has boosted its performance above Llama3-8B-Instruct. Llama3-Aloe-8B-Alpha outperforms larger models like Meditron 70B, and is close to larger base models, like Yi-34B. For the former, this gain is consistent even when using SC-CoT, using their best-reported variant. All these results make Llama3-Aloe-8B-Alpha the best healthcare LLM of its size.

With the help of prompting techniques, the performance of Llama3-Aloe-8B-Alpha is significantly improved. Medprompting in particular provides a 7% increase in reported accuracy, after which Llama3-Aloe-8B-Alpha only lags behind the ten-times-bigger Llama-3-70B-Instruct. This improvement is mostly consistent across medical fields. Llama3-Aloe-8B-Alpha with Medprompting beats the performance of Meditron 70B with their self-reported 20-shot SC-CoT on MMLU-Medical, and is slightly worse on the other benchmarks.

## Environmental Impact

- **Hardware Type:** 64x H100
- **Hours used (8B):** 544 GPU hours
- **Hours used (70B):** 4500 GPU hours
- **Hardware Provider:** Barcelona Supercomputing Center (BSC)
- **Compute Region:** Spain
- **Carbon Emitted:** 34.1 kg of CO2

<!---
^^^ ARE CARBON EMISSIONS FOR BOTH? ^^^
-->

## Authors
Aloe Beta has been developed by the [High Performance Artificial Intelligence](https://hpai.bsc.es/) research group from the [Barcelona Supercomputing Center - BSC](https://www.bsc.es/). The main authors are [Jordi Bayarri Planas](https://huggingface.co/JordiBayarri), Ashwin Kumar Gururajan, and [Dario Garcia-Gasulla](https://huggingface.co/dariog). Red-teaming efforts were led by Adrian Tormos.

Contact: [hpai@bsc.es](mailto:hpai@bsc.es)

## Citations

<!---
Add the prompt engine paper below
-->

If you use this repository in a published work, please cite the corresponding paper as source:

```
@misc{gururajan2024aloe,
      title={Aloe: A Family of Fine-tuned Open Healthcare LLMs},
      author={Ashwin Kumar Gururajan and Enrique Lopez-Cuena and Jordi Bayarri-Planas and Adrian Tormos and Daniel Hinjos and Pablo Bernabeu-Perez and Anna Arias-Duart and Pablo Agustin Martin-Torres and Lucia Urcelay-Ganzabal and Marta Gonzalez-Mallo and Sergio Alvarez-Napagao and Eduard Ayguadé-Parra and Ulises Cortés and Dario Garcia-Gasulla},
      year={2024},
      eprint={2405.01886},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```