---
language:
- vi
pretty_name: Images and corresponding abstracts in Vietnamese Wikipedia
source_datasets:
- original
size_categories:
- 100K<n<1M
---

This dataset comprises nearly 380,000 pairs of images and corresponding textual abstracts extracted from Vietnamese Wikipedia articles. It is designed to facilitate research and development in multimodal learning, particularly in tasks that involve understanding and processing both textual and visual information.

### Description:
- Total images: 374,748
- Total textual abstracts: 374,748

### Dataset Composition:
- Each entry in the dataset consists of an image along with the corresponding abstract text extracted from the introductory section of a Vietnamese Wikipedia article.
- The images are diverse in content, ranging from objects and scenes to landmarks and people, providing a rich and varied set of visual information.

### Data Collection:
The dataset was curated by combining two methods:
- Extracting and filtering abstract text directly from the Wikimedia XML dump file.
- Scraping Vietnamese Wikipedia articles, focusing on the introductory paragraphs known as abstracts. These abstracts serve as concise summaries of the corresponding articles, providing context and key information related to the image.

### Intended Use:
Researchers and developers can utilize this dataset for various tasks such as:
- Multimodal learning: training models to understand and generate descriptions for both images and text.
- Image captioning: generating descriptive captions for images.
- Visual question answering (VQA): developing models that can answer questions about visual content.
- Cross-modal retrieval: matching images to their corresponding textual abstracts and vice versa.

### Data Preprocessing:
- Image format: the images are provided in a standardized JPG format.
- Text preprocessing: the textual abstracts have undergone basic preprocessing, such as removal of leftover XML markup brackets, removal of stray characters such as '\u00A0' (non-breaking space), removal of citation markers such as [1], [2], [3], ..., and removal of unnecessary empty lines within each text. An illustrative cleaning sketch is included at the end of this card.

### Potential Challenges:
- Language complexity: as abstracts are extracted from Wikipedia, the text may include complex vocabulary and diverse topics.
- Ambiguity: some abstracts may contain ambiguous or figurative language, challenging comprehension.
- Image quality: variation in image quality and resolution may impact model performance.
- Text length imbalance: the longest text has a length of 8,903 while the shortest has a length of only 1. This imbalance can lead to high RAM usage when training sequence models such as LSTMs; a truncation sketch is included at the end of this card.

### View dataset:
There are two ways to load the dataset:

1. Use the `datasets` library instead of downloading the dataset locally
```python
from datasets import load_dataset

dataset = load_dataset("Seeker38/image_text_wikipedia_vi", split="train")
```

##### You can use this [Google Colab](https://colab.research.google.com/drive/1BOAEsiVXNGm__vhZ4v_oyqytweG3JTm_?usp=sharing) notebook to see a small viewing demo.

2. For a dataset that has been downloaded locally
```python
import pandas as pd
from datasets import Dataset

parquet_file = 'articles_data.parquet'
df = pd.read_parquet(parquet_file)

# Convert the pandas DataFrame to a datasets.arrow_dataset.Dataset object
dataset = Dataset.from_pandas(df)
```

To view an element's text:
```python
# Example: element number 3
dataset[3]["text"]
```

If you use the 2nd way, then to view (or even use for training) an element's image, you need to include the conversion step:
```python
from PIL import Image
import io

# Example: element number 3
image_bytes = dataset[3]["image"]["bytes"]

# Convert bytes to a PIL Image
image = Image.open(io.BytesIO(image_bytes))
image_rgb = image.convert("RGB")  # some images otherwise raise: ValueError: Could not save to JPEG for display
image_rgb
```

Otherwise, the image can be viewed directly:
```python
dataset[2]["image"]
```
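To cope with the text length imbalance mentioned under Potential Challenges, one option when batch-processing the locally loaded dataset (2nd way) is to combine the image conversion step above with a simple truncation of long abstracts. The snippet below is only a minimal sketch; `prepare_example` and the `MAX_CHARS` cap are illustrative names and values, not part of the dataset:
```python
import io
from PIL import Image

MAX_CHARS = 1000  # arbitrary cap for this sketch; tune for your model

def prepare_example(example, max_chars=MAX_CHARS):
    # `example` follows the parquet layout shown above: image bytes + raw text
    image = Image.open(io.BytesIO(example["image"]["bytes"])).convert("RGB")
    text = example["text"][:max_chars]
    return {"image": image, "text": text}

# Example: element number 3 of the `dataset` created in the 2nd way above
sample = prepare_example(dataset[3])
print(sample["image"].size, len(sample["text"]))
```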
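As noted in the Data Preprocessing section, the abstracts were cleaned of leftover markup, non-breaking spaces, citation markers, and empty lines. The snippet below is only an illustrative sketch of a similar cleaning pass; the `clean_abstract` helper is hypothetical and the exact rules used to build the dataset may differ:
```python
import re

def clean_abstract(text: str) -> str:
    """Apply a cleaning pass similar to the one described above (illustrative only)."""
    text = text.replace("\u00A0", " ")        # replace non-breaking spaces
    text = re.sub(r"\[\d+\]", "", text)       # drop citation markers like [1], [2]
    text = re.sub(r"\n\s*\n+", "\n", text)    # collapse unnecessary empty lines
    return text.strip()

print(clean_abstract("Hà Nội\u00A0là thủ đô của Việt Nam.[1]\n\n"))
```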