JordiBayarri committed • Commit ab5ac55 • Parent(s): 7eeb193

Update README.md
README.md CHANGED
```diff
@@ -39,7 +39,7 @@ Aloe: A Family of Fine-tuned Open Healthcare LLMs
 
 Llama3.1-Aloe-70B-Beta is an **open healthcare LLM** (released with a permissive CC-BY license) achieving **state-of-the-art performance** on several medical tasks. Aloe Beta is made available in two model sizes: [8B](https://huggingface.co/HPAI-BSC/Llama31-Aloe-Beta-8B) and [70B](https://huggingface.co/HPAI-BSC/Llama31-Aloe-Beta-70B). Both models are trained using the same recipe. All necessary resources and details are made available below.
 
-Aloe is trained in
+Aloe is trained in 20 medical tasks, resulting in a robust and versatile healthcare model. Evaluations show Aloe models to be among the best in their class. When combined with a RAG system ([also released](https://github.com/HPAI-BSC/prompt_engine)) the 8B version gets close to the performance of closed models like MedPalm-2, GPT4 and Medprompt. With the same RAG system, Aloe-Beta-70B outperforms those private alternatives, producing state-of-the-art results.
 
 # Aloe-70B-Beta
 
```