pretty_name: CulturalBench
---
# CulturalBench - a Robust, Diverse and Challenging Benchmark on Measuring the (Lack of) Cultural Knowledge of LLMs

## **📚 Resources:** [Paper](https://arxiv.org/pdf/2410.02677) | [Leaderboard](https://huggingface.co/spaces/kellycyy/CulturalBench)


## 📝 Description of CulturalBench

- CulturalBench is a set of 1,227 human-written and human-verified questions for effectively assessing LLMs' cultural knowledge, covering 45 global regions, including underrepresented ones such as Bangladesh, Zimbabwe, and Peru.
- We evaluate models on two setups, CulturalBench-Easy and CulturalBench-Hard, which share the same questions but ask them differently.
  1. CulturalBench-Easy: multiple-choice questions (output: one of four options, i.e., A, B, C, D). Model accuracy is evaluated at the question level (i.e., per `question_idx`). There are 1,227 questions in total.
  2. CulturalBench-Hard: binary judgements (output: one of two possibilities, i.e., True/False). Model accuracy is evaluated at the question level (i.e., per `question_idx`); see the scoring sketch after this list. There are 1,227 × 4 = 4,908 binary judgements in total over the 1,227 questions.
- See details in the CulturalBench paper at [https://arxiv.org/pdf/2410.02677](https://arxiv.org/pdf/2410.02677).
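
As an illustration, here is a minimal sketch of question-level scoring for the Hard setup. It assumes the four binary judgements of a question share its `question_idx`, that a question counts as correct only when all four judgements are correct, and hypothetical field names `prediction` and `answer`; none of this code is part of the dataset itself.

```
from collections import defaultdict

def hard_question_level_accuracy(rows):
    # Group the binary judgements by their parent question.
    judgements_by_question = defaultdict(list)
    for row in rows:
        judgements_by_question[row["question_idx"]].append(
            row["prediction"] == row["answer"]  # hypothetical field names
        )
    # Assumed rule: a question is correct only if all of its
    # four binary judgements are correct.
    per_question = [all(js) for js in judgements_by_question.values()]
    return sum(per_question) / len(per_question)
```
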

### 🌍 Country distribution

| Continent | Number of questions | Included Country/Region |
|-----------------------|------------------|----------------------------------------------------------------|
| North America | 27 | Canada; United States |
| ... | ... | ... |
| Oceania | 26 | Australia; New Zealand |

## 🥇 Leaderboard of CulturalBench

- We evaluated 30 frontier LLMs (last updated 2024-10-04 13:20:58) and host the leaderboard at [https://huggingface.co/spaces/kellycyy/CulturalBench](https://huggingface.co/spaces/kellycyy/CulturalBench).
- We find that LLMs are sensitive to the difference between the two setups (e.g., GPT-4o shows a 27.3% accuracy gap between them).
- Compared to human performance (92.6% accuracy), CulturalBench-Hard is more challenging for frontier LLMs, with the best-performing model (GPT-4o) at only 61.5% and the worst (Llama3-8b) at 21.4%.

## 📊 Example of CulturalBench

- Examples of questions in the two setups:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65fcaae6e5dc5b0ec1b726cf/4LU3Ofl9lzeJGVME3yBMp.png)

## 💻 How to load the datasets

```
from datasets import load_dataset

# The config names below are assumed from this card's description of the two setups.
easy = load_dataset("kellycyy/CulturalBench", "CulturalBench-Easy")
hard = load_dataset("kellycyy/CulturalBench", "CulturalBench-Hard")
```
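
Once loaded, a quick look at the dataset objects confirms the splits and features (the `test` split name below is an assumption, not something documented on this card):

```
print(hard)             # available splits and their features
print(hard["test"][0])  # first binary judgement row; split name assumed
```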