This dataset encompasses a diverse range of cultural concepts from five languages and cultural backgrounds: Indonesian, Swahili, Tamil, Turkish, and Chinese. Specifically, it includes 236 concepts in Chinese, 128 in Indonesian, 202 in Swahili, 178 in Tamil, and 178 in Turkish, with each cultural concept represented by at least two images, for a total of 2,235 high-quality images. The concepts in each culture span ten primary categories: festivals, music, religion and beliefs, animals and plants, food, clothing, architecture, agriculture, tools, and sports, providing broad coverage of each culture. Concepts were sourced from multiple references, including cultural entries on Wikipedia, official national cultural websites, and culture-specific sites identified through search engines, and each underwent rigorous selection and verification to ensure accuracy and relevance. In the final dataset, each concept was validated by evaluators from the respective cultural background, confirming the dataset's diversity and authenticity. This resource provides a solid foundation for research on cultural adaptation and cross-cultural understanding.
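
As a quick orientation, the minimal sketch below restates the dataset composition in plain Python and sanity-checks the reported totals. The per-language concept counts, category names, and image total come from the description above; the variable names and checks themselves are illustrative only and are not part of any official loading script.

```python
# Illustrative summary of the dataset composition described above.
# Numbers and category names are taken from the dataset card;
# the structure and names here are a hypothetical sketch.

CONCEPTS_PER_LANGUAGE = {
    "Chinese": 236,
    "Indonesian": 128,
    "Swahili": 202,
    "Tamil": 178,
    "Turkish": 178,
}

CATEGORIES = [
    "festivals", "music", "religion and beliefs", "animals and plants",
    "food", "clothing", "architecture", "agriculture", "tools", "sports",
]

TOTAL_IMAGES = 2_235  # high-quality images across all concepts

total_concepts = sum(CONCEPTS_PER_LANGUAGE.values())  # 922 concepts overall
print(f"{total_concepts} concepts across {len(CONCEPTS_PER_LANGUAGE)} cultures "
      f"and {len(CATEGORIES)} categories")

# Each concept is represented by at least two images, so the image total
# must cover at least 2 * total_concepts (2 * 922 = 1,844 <= 2,235).
assert TOTAL_IMAGES >= 2 * total_concepts
print(f"on average {TOTAL_IMAGES / total_concepts:.2f} images per concept")
```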
Paper: https://aclanthology.org/2023.emnlp-main.18/
@inproceedings{li-zhang-2023-cultural,
    title = "Cultural Concept Adaptation on Multimodal Reasoning",
    author = "Li, Zhi and Zhang, Yin",
    editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.18",
    doi = "10.18653/v1/2023.emnlp-main.18",
    pages = "262--276",
    abstract = "Developing cultural adaptation methods is important, which can improve the model performance on the low-resource ones and provide more equitable opportunities for everyone to benefit from advanced technology. Past methods primarily focused on multilingual and multimodal capabilities, and the improvement of multicultural competence is still an unexplored problem. This is largely due to the difficulty of data scarcity and expensive annotation. In this paper, we navigate this uncharted territory by leveraging high-resource cultures to facilitate comprehension of low-resource ones. We first introduce an annotation-free method for cultural-concept adaptation and construct a concept mapping set. To facilitate the model{'}s comprehension of cultural-concept mappings, we propose a new multimodal data augmentation called CultureMixup. This approach employs a three-tier code-switching strategy on textual sentences. Additionally, it uses a cultural concept-based mixup method for the images. This combination effectively generates new data instances across culture, phrase, word, and image levels. For visually grounded reasoning across languages and cultures, experimental results on five languages show that our method consistently improves performance for four existing multilingual and multimodal models on both zero-shot and few-shot settings.",
}