---
language:
- vi
pretty_name: Images and corresponding abstracts in Vietnamese Wikipedia
source_datasets:
- original
size_categories:
- 100K<n<1M
---

This dataset comprises nearly 380,000 pairs of images and corresponding textual abstracts extracted from Vietnamese Wikipedia articles. It is designed to facilitate research and development in multimodal learning, particularly in tasks that involve understanding and processing both textual and visual information.

### Description:
- Total Images: 374,751
- Total Textual Abstracts: 374,748

### Dataset Composition:
- Each entry in the dataset consists of an image along with the corresponding abstract text extracted from the introductory section of a Vietnamese Wikipedia article.
- The images are diverse in content, ranging from objects and scenes to landmarks and people, providing a rich and varied set of visual information.

### Data Collection:
The dataset was curated by combining two methods:
- Extracting and filtering abstract text directly from the Wikimedia XML dump file.
- Scraping Vietnamese Wikipedia articles, focusing on the introductory paragraphs known as abstracts.

These abstracts serve as concise summaries of the corresponding articles, providing context and key information related to the image.

### Intended Use:
Researchers and developers can utilize this dataset for various tasks such as:
- Multimodal learning: Training models to understand and generate descriptions for both images and text.
- Image captioning: Generating descriptive captions for images.
- Visual question answering (VQA): Developing models that can answer questions about visual content.
- Cross-modal retrieval: Matching images to their corresponding textual abstracts and vice versa.

### Data Preprocessing:
- Image Format: The images are provided in a standardized JPG format.
- Text Preprocessing: The textual abstracts have undergone basic preprocessing, including removal of markup brackets left over from the XML dump, removal of unwanted characters such as the non-breaking space ('\u00A0'), removal of citation markers such as [1], [2], [3], and removal of unnecessary empty lines inside each text (a minimal cleaning sketch is shown at the end of this card).

### Potential Challenges:
- Language Complexity: As abstracts are extracted from Wikipedia, the text may include complex vocabulary and diverse topics.
- Ambiguity: Some abstracts may contain ambiguous or figurative language, challenging comprehension.
- Image Quality: Variation in image quality and resolution may impact model performance.
- Text Length Imbalance: The longest text has a length of 8,903 while the shortest has a length of 1. This imbalance can lead to high RAM usage when training sequence models such as LSTMs.
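As a companion to the Data Preprocessing and Potential Challenges notes above, the snippet below is a minimal sketch of the kind of cleaning described. The `clean_abstract` helper and its regular-expression patterns are assumptions that mirror the listed steps, not the curators' actual pipeline.

```python
import re

def clean_abstract(text: str) -> str:
    """Illustrative cleaning that mirrors the steps listed under Data Preprocessing."""
    # Replace non-breaking spaces ('\u00A0') with regular spaces.
    text = text.replace("\u00A0", " ")
    # Remove citation markers such as [1], [2], [3].
    text = re.sub(r"\[\d+\]", "", text)
    # Strip XML/HTML-style tags left over from the dump, e.g. <ref>...</ref>.
    text = re.sub(r"<ref[^>]*>.*?</ref>", "", text, flags=re.DOTALL)
    text = re.sub(r"<[^>]+>", "", text)
    # Collapse unnecessary empty lines inside the text.
    text = re.sub(r"\n\s*\n+", "\n", text)
    return text.strip()

if __name__ == "__main__":
    sample = "Hà Nội\u00A0là thủ đô của Việt Nam.[1]<ref>nguồn</ref>\n\n\nDân số ..."
    cleaned = clean_abstract(sample)
    print(cleaned)
    # Checking lengths after cleaning helps surface the 1-to-8,903 length imbalance
    # mentioned above before choosing padding/truncation settings.
    print(len(cleaned))
```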