---
license: cc-by-4.0
dataset_info: null
configs:
- config_name: CulturalBench-Hard
  default: true
  data_files:
  - split: test
    path: CulturalBench-Hard.csv
- config_name: CulturalBench-Easy
  data_files:
  - split: test
    path: CulturalBench-Easy.csv
size_categories:
- 1K<n<10K
pretty_name: CulturalBench
---
# CulturalBench - a Robust, Diverse and Challenging Benchmark on Measuring the (Lack of) Cultural Knowledge of LLMs
## **📌 Resources:** [Paper](https://arxiv.org/pdf/2410.02677) | [Leaderboard](https://huggingface.co/spaces/kellycyy/CulturalBench)

## 📘 Description of CulturalBench
- CulturalBench is a set of 1,227 human-written and human-verified questions for effectively assessing LLMs' cultural knowledge, covering 45 global regions including underrepresented ones like Bangladesh, Zimbabwe, and Peru.
- We evaluate models on two setups, CulturalBench-Easy and CulturalBench-Hard, which share the same questions but pose them differently (a sketch of question-level scoring for the Hard setup appears below):
  1. CulturalBench-Easy: multiple-choice questions (output: one of four options, i.e., A, B, C, or D). Accuracy is evaluated at the question level (i.e., per `question_idx`). There are 1,227 questions in total.
  2. CulturalBench-Hard: binary questions (output: one of two possibilities, i.e., True/False). Accuracy is evaluated at the question level (i.e., per `question_idx`). There are 1,227 × 4 = 4,908 binary judgements in total, derived from the 1,227 questions.

- See the CulturalBench paper for details: [https://arxiv.org/pdf/2410.02677](https://arxiv.org/pdf/2410.02677).
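
For reference, the snippet below is a minimal sketch of question-level scoring on the Hard setup. It assumes that the four binary judgements belonging to one question share the same `question_idx` (as documented above) and that a question counts as correct only when all four of its judgements are answered correctly, matching the Hard evaluation described in the paper; all variable names here are illustrative.

```python
from collections import defaultdict

def question_level_accuracy(question_indices, predictions, gold_labels):
    """Question-level accuracy for the Hard setup.

    question_indices, predictions, and gold_labels are aligned lists with
    one entry per binary judgement; a question is counted as correct only
    if every one of its judgements is correct.
    """
    judgements_per_question = defaultdict(list)
    for qidx, pred, gold in zip(question_indices, predictions, gold_labels):
        judgements_per_question[qidx].append(pred == gold)
    per_question = [all(flags) for flags in judgements_per_question.values()]
    return sum(per_question) / len(per_question)
```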


### 🌎 Country distribution
  | Continent             | Num of questions | Included Country/Region                                        |
  |-----------------------|------------------|----------------------------------------------------------------|
  | North America         | 27               | Canada; United States                                          |
  | South America         | 150              | Argentina; Brazil; Chile; Mexico; Peru                         |
  | East Europe           | 115              | Czech Republic; Poland; Romania; Ukraine; Russia               |
  | South Europe          | 76               | Spain; Italy                                                   |
  | West Europe           | 96               | France; Germany; Netherlands; United Kingdom                   |
  | Africa                | 134              | Egypt; Morocco; Nigeria; South Africa; Zimbabwe                |
  | Middle East/West Asia | 127              | Iran; Israel; Lebanon; Saudi Arabia; Turkey                    |
  | South Asia            | 106              | Bangladesh; India; Nepal; Pakistan                             |
  | Southeast Asia        | 159              | Indonesia; Malaysia; Philippines; Singapore; Thailand; Vietnam |
  | East Asia             | 211              | China; Hong Kong; Japan; South Korea; Taiwan                   |
  | Oceania               | 26               | Australia; New Zealand                                         |


## 🥇 Leaderboard of CulturalBench
- We evaluated 30 frontier LLMs (last updated 2024-10-04 13:20:58) and host the leaderboard at [https://huggingface.co/spaces/kellycyy/CulturalBench](https://huggingface.co/spaces/kellycyy/CulturalBench).
- We find that LLMs are sensitive to the difference between the two setups (e.g., GPT-4o shows a 27.3% accuracy gap between them).
- Compared to human performance (92.6% accuracy), CulturalBench-Hard is more challenging for frontier LLMs: the best-performing model (GPT-4o) reaches only 61.5%, and the worst (Llama3-8b) only 21.4%.


## 📖 Example of CulturalBench
- Examples of questions in the two setups:
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65fcaae6e5dc5b0ec1b726cf/4LU3Ofl9lzeJGVME3yBMp.png)

  
## 💻 How to load the datasets

```python
from datasets import load_dataset

# Each config has a single "test" split.
ds_hard = load_dataset("kellycyy/CulturalBench", "CulturalBench-Hard")
ds_easy = load_dataset("kellycyy/CulturalBench", "CulturalBench-Easy")
```
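
To check what was loaded, a quick inspection using only the standard `datasets` API might look like this (`question_idx` is the documented per-question identifier; the remaining column names come from the CSV files themselves):

```python
# Each config is a DatasetDict with a single "test" split.
print(ds_hard)
print(ds_hard["test"].column_names)  # CSV columns, including question_idx
print(ds_hard["test"][0])            # first binary judgement as a dict
```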

## Contact

E-Mail: [Kelly Chiu](mailto:kellycyy@uw.edu)

## Citation

If you find this dataset useful, please cite the following work:

```bibtex
@misc{chiu2024culturalbenchrobustdiversechallenging,
      title={CulturalBench: a Robust, Diverse and Challenging Benchmark on Measuring the (Lack of) Cultural Knowledge of LLMs}, 
      author={Yu Ying Chiu and Liwei Jiang and Bill Yuchen Lin and Chan Young Park and Shuyue Stella Li and Sahithya Ravi and Mehar Bhatia and Maria Antoniak and Yulia Tsvetkov and Vered Shwartz and Yejin Choi},
      year={2024},
      eprint={2410.02677},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.02677}, 
}
```