Modalities: Image, Text
Formats: json
Libraries: Datasets, Dask
davidanugraha committed · verified
Commit a018b7f · 1 parent: 3c635c7

Update README.md but still need to describe how to eval

Files changed (1):
  README.md (+13 -8)
README.md CHANGED
@@ -60,6 +60,8 @@ configs:
 
 # WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines
 
+ ![Overview](./images/tasks.png)
+
 WorldCuisines is a massive-scale visual question answering (VQA) benchmark for multilingual and multicultural understanding through global cuisines. The dataset contains text-image pairs across 30 languages and dialects, spanning 9 language families and featuring over 1 million data points, making it the largest multicultural VQA benchmark as of 17 October 2024.
 
 ## Overview
@@ -79,13 +81,7 @@ From **WC-KB**, we construct **WC-VQA**, a multilingual parallel VQA dataset wit
 We provide **WC-VQA** evaluation datasets in two sizes (12,000 and 60,000 instances) alongside a training dataset (1,080,000 instances).
 The table below provides more detailed statistics regarding the number of VQA instances and images for each data split.
 
- | Data Split | Task 1 (Dish Name) | Task 2 (Location) | Total VQA |
- |---------------------|--------------------------------------------------------------|-------------------|-----------|
- | | (a) no-context | (b) contextualized | (c) adversarial | | |
- | | # VQA | # Images | # VQA | # Images | # VQA | # Images | # VQA | # Images | |
- | **Train (1M)** | 270,300 | 3,383 | 267,930 | 3,555 | 271,770 | 3,589 | 270,000 | 3,361 | 1,080,000 |
- | **Test Small (12k)** | 3,000 | 100 | 3,000 | 100 | 3,000 | 100 | 3,000 | 100 | 12,000 |
- | **Test Large (60k)** | 15,000 | 500 | 15,000 | 500 | 15,000 | 499 | 15,000 | 499 | 60,000 |
+ ![StatisticsTable](./images/stats_table.png)
 
 ## Dataset Construction
 
@@ -177,4 +173,13 @@ E-mail: [Genta Indra Winata](genta.winata@capitalone.com) and [Frederikus Hudi](
 
 ## Citation
 
- If you find this dataset useful, please cite the following works
+ If you find this dataset useful, please cite the following works
+
+ ```bibtex
+ @inproceedings{Winata2024WorldCuisinesAM,
+ title={WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines},
+ author={Genta Indra Winata and Frederikus Hudi and Patrick Amadeus Irawan and David Anugraha and Rifki Afina Putri and Yutong Wang and Adam Nohejl and Ubaidillah Ariq Prathama and Nedjma Djouhra Ousidhoum and Afifa Amriani and Anar Y. Rzayev and Anirban Das and Ashmari Pramodya and Aulia Adila and Bryan Wilie and Candy Olivia Mawalim and Ching Lam Cheng and Daud Olamide Abolade and Emmanuele Chersoni and Enrico Santus and Fariz Ikhwantri and Garry Kuwanto and Hanyang Zhao and Haryo Akbarianto Wibowo and Holy Lovenia and Jan Christian Blaise Cruz and Jan Wira Gotama Putra and Junho Myung and Lucky Susanto and Maria Angelica Riera Machin and Marina Zhukova and Michael Anugraha and Muhammad Farid Adilazuarda and Natasha Santosa and Peerat Limkonchotiwat and Raj Dabre and Rio Alexander Audino and Samuel Cahyawijaya and Shi-Xiong Zhang and Stephanie Yulia Salim and Yi Zhou and Yinxuan Gui and David Ifeoluwa Adelani and En-Shiun Annie Lee and Shogo Okada and Ayu Purwarianti and Alham Fikri Aji and Taro Watanabe and Derry Tanti Wijaya and Alice Oh and Chong-Wah Ngo},
+ year={2024},
+ url={https://api.semanticscholar.org/CorpusID:273375521}
+ }
+ ```
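
For readers who want to poke at the splits described in the statistics hunk above (train, test small, test large), here is a minimal loading sketch using the `datasets` library listed on this card. The repo id, config name, and split name below are assumptions rather than values taken from this commit; the card's `configs` section holds the authoritative names.

```python
# Minimal sketch: load one WC-VQA split with the Hugging Face `datasets` library.
# The repo id, config name, and split name are assumptions; check the card's
# `configs` section for the exact values before running.
from datasets import load_dataset

ds = load_dataset(
    "worldcuisines/vqa",   # assumed repo id for this dataset card
    name="task1",          # hypothetical config name (e.g. the dish-name task)
    split="test_small",    # 12,000-instance evaluation split per the README
)

print(ds)     # features and row count
print(ds[0])  # one VQA instance: question/answer text fields plus an image
```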