---
license: apache-2.0
language:
- hi
- en
base_model: teknium/OpenHermes-2.5
---

Model trained on Hindi and English data.

Try it out: https://colab.research.google.com/drive/1A_hbsq1vrCeAh3dEMvtwxxNxcNZ1BUyW?usp=sharing

For sample responses to different prompts, check out: https://github.com/manishiitg/hi-llm-eval

#### Language Hi

| Model | xlsum-hi | truthfulqa-hi | indic-arc-easy | mmlu_hi | indicqa | flores | indicheadline | indicxparaphrase | hellaswag-indic | indicwikibio | boolq-hi | implicit_hate | indic-arc-challenge | indicsentiment |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| open-aditi-hi-v2 | 0.4213 | 0.6934 | 0.4979 | 0.3253 | 0.0795 | 43.6822 | 0.4565 | 0.6838 | 0.2404 | 0.4846 | 0.8541 | 11.5021 | 0.4462 | 0.9729 |
| open-aditi-hi-v3 | 0.4490 | 0.5369 | 0.5480 | 0.1351 | 0.0058 | 48.2859 | 0.4682 | 0.8846 | 0.4891 | 0.5034 | 0.5401 | 8.8315 | 0.4633 | 0.9519 |
| open-aditi-hi-v4 | 0.4046 | 0.7671 | 0.4529 | 0.2124 | 0.0026 | 47.8500 | 0.1980 | 0.7737 | 0.3595 | 0.4894 | 0.7015 | 5.9709 | 0.3857 | 0.9699 |
| OpenHermes-2.5-Mistral-7B | 0.1774 | 0.3234 | 0.3523 | 0.2769 | 0.2721 | 30.3465 | 0.1996 | 0.8766 | 0.2485 | 0.3332 | 0.5979 | 0.2068 | 0.3396 | 0.9048 |
| OpenHermes-2.5-Mistral-7B-AWQ | 0.1894 | 0.3428 | 0.3291 | 0.2750 | 0.3116 | 29.3681 | 0.2062 | 0.8536 | 0.2479 | 0.3067 | 0.5272 | 6.0594 | 0.3157 | 0.9218 |
| open-aditi-hi-v1 | 0.4212 | 0.4230 | 0.3889 | 0.1398 | 0.1306 | 40.2376 | 0.4248 | 0.5939 | 0.0848 | 0.4104 | 0.3758 | 8.6105 | 0.3558 | 0.8798 |
| Airavata | 0.4650 | 0.0466 | 0.1128 | 0.1336 | 0.0155 | 58.5260 | 0.4346 | 0.6419 | 0.0550 | 0.0637 | 0.0128 | 6.3612 | 0.0836 | 0.0992 |

#### Language En

| Model | boolq | truthfulqa | arc-easy-exact | mmlu | hellaswag | xlsum | arc-challenge |
| --- | --- | --- | --- | --- | --- | --- | --- |
| open-aditi-hi-v4 | 0.3905 | 0.3378 | 0.8460 | 0.5725 | 0.7603 | 0.4384 | 0.7491 |
| OpenHermes-2.5-Mistral-7B | 0.4061 | 0.2081 | 0.8687 | 0.5991 | 0.7999 | 0.4328 | 0.7790 |
| OpenHermes-2.5-Mistral-7B-AWQ | 0.4199 | 0.1897 | 0.8569 | 0.5816 | 0.7826 | 0.4317 | 0.7611 |
| open-aditi-hi-v3 | 0.3749 | 0.3097 | 0.8384 | 0.5478 | 0.7645 | 0.4352 | 0.7415 |
| open-aditi-hi-v2 | 0.3982 | 0.2999 | 0.8388 | 0.5544 | 0.4738 | 0.4349 | 0.7235 |
| open-aditi-hi-v1 | 0.0434 | 0.3317 | 0.7588 | 0.2597 | 0.3509 | 0.4288 | 0.6271 |
| Airavata | 0.5086 | 0.3574 | 0.6772 | 0.1165 | 0.1799 | 0.4393 | 0.1630 |

Metric used for each task:

| Task | Metric |
| --- | --- |
| flores | chrf |
| implicit_hate | chrf |
| indicsentiment | accuracy |
| indicxparaphrase | accuracy |
| boolq-hi | accuracy |
| truthfulqa-hi | accuracy |
| indic-arc-easy | accuracy |
| indicwikibio | bleurt |
| hellaswag-indic | accuracy |
| indicheadline | bleurt |
| xlsum-hi | bleurt |
| indic-arc-challenge | accuracy |
| mmlu_hi | average_acc |
| indicqa | accuracy |
| arc-easy-exact | accuracy |
| hellaswag | accuracy |
| arc-challenge | accuracy |
| mmlu | average_acc |
| boolq | accuracy |
| xlsum | bleurt |
| truthfulqa | accuracy |

Model evaluation on the Open LLM Leaderboard:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5dfae476da6d0311fd3d5432/ENzZwV2Z98uNlpyUz3Blp.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5dfae476da6d0311fd3d5432/SpSiu5lzA6JKJx8ICX_zd.png)
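The base model, OpenHermes-2.5, is trained with the ChatML prompt format. Assuming this fine-tune inherits that template (verify against the tokenizer's chat template before relying on it), a prompt can be assembled like this; `build_chatml_prompt` is a hypothetical helper, not part of any library:

```python
# Sketch of a ChatML prompt builder. ChatML wraps each turn in
# <|im_start|>{role} ... <|im_end|> markers; the trailing assistant
# header cues the model to generate its reply.

def build_chatml_prompt(messages):
    """Render a list of {"role": ..., "content": ...} dicts as a ChatML string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Open the assistant turn so generation continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful bilingual assistant."},
    {"role": "user", "content": "भारत की राजधानी क्या है?"},
])
print(prompt)
```

The resulting string can be passed directly to the tokenizer for generation; equivalently, `tokenizer.apply_chat_template` handles this automatically when the model repo ships a chat template.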