Update README.md
README.md CHANGED
---
license: cc-by-2.0
---

# IndoMMLU

<!---
[![evaluation](https://img.shields.io/badge/OpenCompass-Support-royalblue.svg)](https://github.com/internLM/OpenCompass/) [![evaluation](https://img.shields.io/badge/lm--evaluation--harness-Support-blue)](https://github.com/EleutherAI/lm-evaluation-harness)
-->

<p align="center"> <img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/IndoMMLU-Bar.png" style="width: 100%;" id="title-icon"> </p>

<p align="center"> <a href="http://www.fajrikoto.com" target="_blank">Fajri Koto</a>, <a href="https://www.linkedin.com/in/nuaisyah/" target="_blank">Nurul Aisyah</a>, <a href="https://haonan-li.github.io/" target="_blank">Haonan Li</a>, <a href="https://people.eng.unimelb.edu.au/tbaldwin/" target="_blank">Timothy Baldwin</a> </p>

<h4 align="center">
    <p align="center" style="display: flex; flex-direction: row; justify-content: center; align-items: center">
        📄 <a href="https://arxiv.org/abs/2310.04928" target="_blank" style="margin-right: 15px; margin-left: 10px">Paper</a> •
        🏆 <a href="https://github.com/fajri91/IndoMMLU/blob/main/README_EN.md#evaluation" target="_blank" style="margin-left: 10px">Leaderboard</a> •
        🤗 <a href="https://huggingface.co/datasets/indolem/indommlu" target="_blank" style="margin-left: 10px">Dataset</a>
    </p>
</h4>

## Introduction

We introduce IndoMMLU, the first multi-task language understanding benchmark for Indonesian culture and languages, which consists of questions from primary school to university entrance exams in Indonesia. By employing professional teachers, we obtain 14,906 questions across 63 tasks and education levels, with 46% of the questions focusing on assessing proficiency in the Indonesian language and knowledge of nine local languages and cultures in Indonesia.

<p align="left"> <img src="https://github.com/fajri91/eval_picts/blob/master/IndoMMLU-dist.png?raw=true" style="width: 500px;" id="title-icon"> </p>

## Subjects

| Level | Subjects |
|-----------|------------------------------------|
| SD (Primary School) | Science, Social science, Civics, Indonesian Language, Balinese, Makassarese, Banjarese, Lampungic, Madurese, Sundanese, Javanese, Dayak Ngaju, Minangkabau culture, Art, Sports, Islam religion, Christian religion, Hindu religion |
| SMP (Junior High School) | Science, Social science, Civics, Indonesian Language, Balinese, Makassarese, Banjarese, Lampungic, Madurese, Sundanese, Javanese, Minangkabau culture, Art, Sports, Islam religion, Christian religion, Hindu religion |
| SMA (Senior High School) | Physics, Chemistry, Biology, Geography, Sociology, Economics, History, Civics, Indonesian Language, Balinese, Makassarese, Banjarese, Lampungic, Madurese, Sundanese, Javanese, Art, Sports, Islam religion, Christian religion, Hindu religion |
| University Entrance Test | Chemistry, Biology, Geography, Sociology, Economics, History, Indonesian Language |

We categorize the collected questions into the following subject areas: (1) STEM (Science, Technology, Engineering, and Mathematics); (2) Social Science; (3) Humanities; (4) Indonesian Language; and (5) Local Languages and Cultures.

## Examples

Questions are written in Indonesian; for local-language subjects, some are written in the respective local language. The English translations are for illustrative purposes only.

<p align="left">
  <img src="https://github.com/fajri91/eval_picts/blob/master/min_example.png?raw=true" style="width: 400px;" id="title-icon">
</p>

## Evaluation

We evaluate 24 multilingual LLMs of different sizes in zero-shot and few-shot settings, including [GPT-3.5 (ChatGPT)](https://chat.openai.com/), [XGLM](https://arxiv.org/abs/2112.10668), [Falcon](https://falconllm.tii.ae/), [BLOOMZ](https://huggingface.co/bigscience/bloomz), [mT0](https://huggingface.co/bigscience/mt0-xxl), [LLaMA](https://arxiv.org/abs/2302.13971), and [Bactrian-X](https://github.com/mbzuai-nlp/bactrian-x). Before each question and its multiple-choice options, we prepend a simple prompt in Indonesian:

```
Ini adalah soal [subject] untuk [level]. Pilihlah salah satu jawaban yang dianggap benar!

English translation: This is a [subject] question for [level]. Please choose the correct answer!
```
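
For illustration, here is a minimal sketch of how such a prompt could be assembled for a single question. The field names (`subject`, `level`, `question`, `options`), the A–E option labels, and the trailing `Jawaban:` ("Answer:") cue are assumptions made for this sketch, not necessarily what the repository's `evaluate.py` does.

```python
# Illustrative only: assemble a zero-shot prompt for one multiple-choice question.
# Field names and the answer cue are assumptions, not the dataset's guaranteed schema.
def build_prompt(subject: str, level: str, question: str, options: list[str]) -> str:
    header = (
        f"Ini adalah soal {subject} untuk {level}. "
        "Pilihlah salah satu jawaban yang dianggap benar!"
    )
    letters = "ABCDE"  # questions have up to 5 options
    choices = "\n".join(f"{letters[i]}. {opt}" for i, opt in enumerate(options))
    return f"{header}\n\n{question}\n{choices}\nJawaban:"


print(build_prompt(
    subject="IPA",        # hypothetical subject name (science)
    level="SD",           # primary school
    question="Matahari terbit di sebelah ...",          # "The sun rises in the ..."
    options=["timur", "barat", "utara", "selatan"],      # east, west, north, south
))
```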

#### Zero-shot Evaluation

| Model (#param) | STEM | Social Science | Humanities | Indonesian Lang. | Local Lang. & Culture | Average |
|---------------------|------|----------|-------------|---------|----------|---------|
| Random | 21.9 | 23.4 | 23.5 | 24.4 | 26.6 | 24.4 |
| [GPT-3.5 (175B)](https://chat.openai.com/) | **54.3** | **62.5** | **64.0** | **62.2** | 39.3 | **53.2** |
| [XGLM (564M)](https://huggingface.co/facebook/xglm-564M) | 22.1 | 23.0 | 25.6 | 25.6 | 27.5 | 25.2 |
| [XGLM (1.7B)](https://huggingface.co/facebook/xglm-1.7B) | 20.9 | 23.0 | 24.6 | 24.8 | 26.6 | 24.4 |
| [XGLM (2.9B)](https://huggingface.co/facebook/xglm-2.9B) | 22.9 | 23.2 | 25.4 | 26.3 | 27.2 | 25.2 |
| [XGLM (4.5B)](https://huggingface.co/facebook/xglm-4.5B) | 21.8 | 23.1 | 25.6 | 25.8 | 27.1 | 25.0 |
| [XGLM (7.5B)](https://huggingface.co/facebook/xglm-7.5B) | 22.7 | 21.7 | 23.6 | 24.5 | 27.5 | 24.5 |
| [Falcon (7B)](https://huggingface.co/tiiuae/falcon-7b) | 22.1 | 22.9 | 25.5 | 25.7 | 27.5 | 25.1 |
| [Falcon (40B)](https://huggingface.co/tiiuae/falcon-40b) | 30.2 | 34.8 | 34.8 | 34.9 | 29.2 | 32.1 |
| [BLOOMZ (560M)](https://huggingface.co/bigscience/bloomz-560m) | 22.9 | 23.6 | 23.2 | 24.2 | 25.1 | 24.0 |
| [BLOOMZ (1.1B)](https://huggingface.co/bigscience/bloomz-1b1) | 20.4 | 21.4 | 21.1 | 23.5 | 24.7 | 22.4 |
| [BLOOMZ (1.7B)](https://huggingface.co/bigscience/bloomz-1b7) | 31.5 | 39.3 | 38.3 | 42.8 | 29.4 | 34.4 |
| [BLOOMZ (3B)](https://huggingface.co/bigscience/bloomz-3b) | 33.5 | 44.5 | 39.7 | 46.7 | 29.8 | 36.4 |
| [BLOOMZ (7.1B)](https://huggingface.co/bigscience/bloomz-7b1) | 37.1 | 46.7 | 44.0 | 49.1 | 28.2 | 38.0 |
| [mT0<sub>small</sub> (300M)](https://huggingface.co/bigscience/mt0-small) | 21.8 | 21.4 | 25.7 | 25.1 | 27.6 | 24.9 |
| [mT0<sub>base</sub> (580M)](https://huggingface.co/bigscience/mt0-base) | 22.6 | 22.6 | 25.7 | 25.6 | 26.9 | 25.0 |
| [mT0<sub>large</sub> (1.2B)](https://huggingface.co/bigscience/mt0-large) | 22.0 | 23.4 | 25.1 | 27.3 | 27.6 | 25.2 |
| [mT0<sub>xl</sub> (3.7B)](https://huggingface.co/bigscience/mt0-xl) | 31.4 | 42.9 | 41.0 | 47.8 | 35.7 | 38.2 |
| [mT0<sub>xxl</sub> (13B)](https://huggingface.co/bigscience/mt0-xxl) | 33.5 | 46.2 | 47.9 | 52.6 | **39.6** | 42.5 |
| [LLaMA (7B)](https://arxiv.org/abs/2302.13971) | 22.8 | 23.1 | 25.1 | 26.7 | 27.6 | 25.3 |
| [LLaMA (13B)](https://arxiv.org/abs/2302.13971) | 24.1 | 23.0 | 24.4 | 29.5 | 26.7 | 25.3 |
| [LLaMA (30B)](https://arxiv.org/abs/2302.13971) | 25.4 | 23.5 | 25.9 | 28.4 | 28.7 | 26.5 |
| [LLaMA (65B)](https://arxiv.org/abs/2302.13971) | 33.0 | 37.7 | 40.8 | 41.4 | 32.1 | 35.8 |
| [Bactrian-X-LLaMA (7B)](https://github.com/mbzuai-nlp/bactrian-x) | 23.3 | 24.0 | 26.0 | 26.1 | 27.5 | 25.7 |
| [Bactrian-X-LLaMA (13B)](https://github.com/mbzuai-nlp/bactrian-x) | 28.3 | 29.9 | 32.8 | 35.2 | 29.2 | 30.3 |

#### GPT-3.5 performance (% accuracy) across different education levels

<p align="left">
  <img src="https://github.com/fajri91/eval_picts/blob/master/IndoMMLU-result.png?raw=true" style="width: 370px;" id="title-icon">
</p>

Red indicates that the score is below the minimum passing threshold of 65, while green signifies a score at or above this minimum. We observe that ChatGPT reliably reaches the passing score of 65 only on Indonesian primary school exams.

#### Few-shot Evaluation

<p align="left">
  <img src="https://github.com/fajri91/eval_picts/blob/master/plot_fewshot.png?raw=true" style="width: 380px;" id="title-icon">
</p>

## Data

Each question in the dataset is a multiple-choice question with up to five answer options, exactly one of which is correct.
The dataset is organized by subject in the [data](data) folder, and is also available on [Hugging Face](https://huggingface.co/datasets/indolem/indommlu).

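A minimal sketch of loading the dataset from the Hub with the `datasets` library is shown below; the split and column names depend on the dataset's actual schema, so inspect the printed `DatasetDict` and `features` rather than relying on any names assumed here.

```python
# Minimal sketch: load IndoMMLU from the Hugging Face Hub.
# Requires: pip install datasets
from datasets import load_dataset

dataset = load_dataset("indolem/indommlu")
print(dataset)  # shows the available splits and their sizes

# Peek at one record without assuming a particular split name.
split_name = next(iter(dataset))
print(dataset[split_name].features)  # column names and types
print(dataset[split_name][0])        # one multiple-choice question record
```
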
<!--
#### Quick Use

Our dataset has been added to [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [OpenCompass](https://github.com/InternLM/opencompass), so you can evaluate your model with these open-source tools.
-->

#### Evaluation

The evaluation code for each model we used is in `evaluate.py`, and the commands to run it are listed in `run.sh`.

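The snippet below is not the repository's `evaluate.py`; it is only a hedged sketch of the overall loop (prompt each question, extract a predicted option letter, compute accuracy), reusing the hypothetical `build_prompt` helper sketched above. The column names and the `generate_answer` callback are assumptions for illustration.

```python
# Hypothetical zero-shot evaluation loop, for illustration only.
# `examples` is an iterable of dicts with "subject", "level", "question",
# "options", and "answer" (an option letter) -- assumed names, not the real schema.
# `generate_answer(prompt)` is any callable that returns the model's raw text output.
def zero_shot_accuracy(examples, generate_answer) -> float:
    correct = 0
    total = 0
    for ex in examples:
        prompt = build_prompt(ex["subject"], ex["level"], ex["question"], ex["options"])
        prediction = generate_answer(prompt).strip().upper()
        correct += int(prediction.startswith(ex["answer"].upper()))
        total += 1
    return correct / total if total else 0.0
```
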

## Citation

```
@inproceedings{koto-etal-2023-indommlu,
    title = "Large Language Models Only Pass Primary School Exams in {I}ndonesia: A Comprehensive Test on {I}ndo{MMLU}",
    author = "Fajri Koto and Nurul Aisyah and Haonan Li and Timothy Baldwin",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
}
```

## License

The IndoMMLU dataset is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).