JordiBayarri committed
Commit c877fb8
1 Parent(s): 70cd22a

Update README.md

Files changed (1)
  1. README.md +5 -8
README.md CHANGED
@@ -83,24 +83,21 @@ Aloe Beta has been tested on the most popular healthcare QA datasets, with and w
 
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/Ad9Rs3rh_z3LxuqdcKdpy.png)
 
- <!---
- The Beta model has been developed to excel in several different medical tasks. For this reason, we evaluated the model in many different medical tasks:
-
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/ZABYUxpQRMDcrJmKhkEfz.png)
+ The Beta model has been developed to excel in several different medical tasks. For this reason, we evaluated the model in many different medical benchmarks:
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/2NW3im0aH2u6RKp969sjx.png)
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/lPcEzQbWRq13H6tN_mEg5.png)
 
 
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/ORkSfVkwXqefEtDnIBMOJ.png)
 
- -->
 
  We also compared the performance of the model in the general domain, using the OpenLLM Leaderboard benchmark. Aloe-Beta gets competitive results with the current SOTA general models in the most used general benchmarks and outperforms the medical models:
 
 
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/UKW36y9yjqn3Q5OfrCuIc.png)
 
- More evaluations coming soon!
+
 
 
  ## Uses
@@ -263,7 +260,7 @@ The training set consists of around 1.8B tokens, having 3 different types of dat
  - [HPAI-BSC/headqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/headqa-cot-llama31)
  - [HPAI-BSC/MMLU-medical-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/MMLU-medical-cot-llama31)
  - [HPAI-BSC/Polymed-QA](https://huggingface.co/datasets/HPAI-BSC/Polymed-QA)
- - General data. It includes maths, STEM, code, function calling, and instruction of very long instructions.
+ - General data. It includes maths, STEM, code, function calling, and instructions with very long context.
  - [HPAI-BSC/Aloe-Beta-General-Collection](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-General-Collection)
 
  #### Training parameters
 
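The training collections named in the second hunk are standard Hub datasets. Below is a minimal sketch of pulling one of them with the `datasets` library; the default config and the `train` split name are assumptions, not something this commit specifies:

```python
# Minimal sketch: load one of the training collections referenced above.
# Assumes the dataset exposes a default config with a "train" split.
from datasets import load_dataset

general_data = load_dataset("HPAI-BSC/Aloe-Beta-General-Collection", split="train")
print(general_data)      # prints the feature schema and number of rows
print(general_data[0])   # inspect one instruction example
```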