---
language:
- vi
pretty_name: Images and corresponding abstracts in Vietnamese Wikipedia
source_datasets:
- original
size_categories:
- 100K<n<1M
tags:
- wikipedia
- images
- text
- LM
dataset_info:
features:
- name: image
dtype: image
- name: title
dtype: string
- name: text
dtype: string
# splits:
# - name: train
---
# Dataset Card for image_text_wikipedia_vi
### Dataset Summary
Image-Text Wikipedia Abstracts (Vietnamese version) <br>
This dataset comprises nearly 380,000 pairs of images and corresponding textual abstracts extracted from Vietnamese Wikipedia articles. It is designed to support research and development in multimodal learning, particularly tasks that involve understanding and processing both textual and visual information.
Description:
- Total Images: 374,748
- Total Textual Abstracts: 374,748
Dataset Composition:
- Each entry in the dataset consists of an image along with the corresponding abstract text extracted from the introductory section of Vietnamese Wikipedia articles.<br>
- The images are diverse in content, ranging from objects and scenes to landmarks and people, providing a rich and varied set of visual information.
### Data Collection:
The dataset was curated by combining two methods (an illustrative fetch for the second method is sketched after this list):
- Extracting and filtering abstract text directly from the Wikimedia XML dump file.
- Scraping Vietnamese Wikipedia articles, focusing on the introductory paragraphs known as abstracts. These abstracts serve as concise summaries of the corresponding articles, providing context and key information related to the image.
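As an illustration of the second method, the sketch below fetches a single article abstract through the public Wikimedia REST API. This is not the exact pipeline used to build the dataset; the endpoint usage and `User-Agent` string are only examples.
```python
import requests

def fetch_vi_abstract(title: str) -> str:
    # Illustrative only: fetch an article's introductory abstract via the
    # public Wikimedia REST API; the dataset's actual scraper may differ.
    url = f"https://vi.wikipedia.org/api/rest_v1/page/summary/{title.replace(' ', '_')}"
    resp = requests.get(url, headers={"User-Agent": "demo/0.1"}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("extract", "")

print(fetch_vi_abstract("Hà Nội")[:200])
```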
### Intended Use:
Researchers and developers can utilize this dataset for various tasks such as the following (a minimal batching sketch appears after the list):
- Multimodal learning: Training models to understand and generate descriptions for both images and text.
- Image captioning: Generating descriptive captions for images.
- Visual question answering (VQA): Developing models that can answer questions about visual content.
- Cross-modal retrieval: Matching images to their corresponding textual abstracts and vice versa.
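A minimal batching sketch for such tasks, assuming the dataset has been loaded with `load_dataset` as shown in the "View dataset" section below (so each `"image"` is already a PIL image) and that PyTorch/torchvision are available. The transform and batch size are arbitrary choices, and tokenization is left to whichever model you pair with the data.
```python
import torch
from torch.utils.data import DataLoader
from torchvision import transforms

# Resize + tensorize images; values here are arbitrary for the sketch.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def collate(batch):
    images = torch.stack([preprocess(ex["image"].convert("RGB")) for ex in batch])
    texts = [ex["text"] for ex in batch]  # tokenize with your model's tokenizer
    return images, texts

loader = DataLoader(dataset, batch_size=32, shuffle=True, collate_fn=collate)
images, texts = next(iter(loader))
```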
### Data Preprocessing:
- Image Format: The images are provided in a standardized JPG format.
- Text Preprocessing: The textual abstracts have undergone basic preprocessing, including removal of leftover XML markup brackets, removal of non-printing characters such as '\u00A0' (non-breaking space), removal of citation markers such as [1], [2], [3], and removal of unnecessary empty lines within each text; an illustrative re-implementation is sketched below.
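A sketch of these cleaning steps, re-implemented with simple regular expressions (the dataset's actual preprocessing code may differ in its details):
```python
import re

def clean_abstract(text: str) -> str:
    text = text.replace("\u00A0", " ")      # non-breaking spaces
    text = re.sub(r"\[\d+\]", "", text)     # citation markers: [1], [2], ...
    text = re.sub(r"<[^>]+>", "", text)     # leftover XML/HTML tags
    text = re.sub(r"\n\s*\n+", "\n", text)  # collapse empty lines
    return text.strip()

print(clean_abstract("Hà Nội\u00A0là thủ đô[1] của Việt Nam.\n\n\nXem thêm."))
```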
### Potential Challenges:
- Language Complexity: As abstracts are extracted from Wikipedia, the text might include complex vocabulary and diverse topics.
- Ambiguity: Some abstracts may contain ambiguous or figurative language, challenging comprehension.
- Image Quality: Variation in image quality and resolution may impact model performance.
- Text length imbalance: The longest text is 8,903 characters while the shortest is only 1. This imbalance can lead to very high RAM usage when training sequence models such as LSTMs; the sketch after this list shows one way to inspect and filter by length.
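One way to inspect and mitigate the length imbalance, assuming the dataset has been loaded as shown in the next section; the length thresholds here are arbitrary:
```python
# Inspect the text-length distribution, then drop extreme outliers.
lengths = [len(t) for t in dataset["text"]]
print(min(lengths), max(lengths))  # reported extremes: 1 and 8903

# Arbitrary length band; adjust to your model's memory budget.
filtered = dataset.filter(lambda ex: 20 <= len(ex["text"]) <= 2000)
```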
### View dataset:
There are two ways to load the dataset:
<b>1. Use the `datasets` library instead of downloading the dataset locally</b>
```python
from datasets import load_dataset
dataset = load_dataset("Seeker38/image_text_wikipedia_vi", split="train")
```
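If you only want to inspect a few examples, streaming (a standard `datasets` option) avoids downloading the full dataset up front:
```python
from datasets import load_dataset

# Stream examples one at a time instead of fetching all ~375k pairs.
stream = load_dataset("Seeker38/image_text_wikipedia_vi", split="train", streaming=True)
example = next(iter(stream))
print(example["title"])
```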
##### You can use this <b>[Google Colab](https://colab.research.google.com/drive/1BOAEsiVXNGm__vhZ4v_oyqytweG3JTm_?usp=sharing)</b> notebook to see a short viewing demo.
<b>2. For a dataset that has been downloaded locally</b>
```python
import pandas as pd
from datasets import Dataset
parquet_file = 'articles_data.parquet'
df = pd.read_parquet(parquet_file)
# Convert the pandas DataFrame to a datasets.arrow_dataset.Dataset object
dataset = Dataset.from_pandas(df)
```
<b>To view an element's text</b>
```python
# Example: element number 3
dataset[3]["text"]
```
<b>If you use the 2nd way, then to view, or even train on, an element's image, you need to include a conversion step</b>
```python
from PIL import Image
import io
# Example: element number 3
image_bytes = dataset[3]["image"]["bytes"]
# Convert bytes to Image
image = Image.open(io.BytesIO(image_bytes))
image_rgb = image.convert("RGB")  # without this, some images raise "ValueError: Could not save to JPEG" when displayed
image_rgb
```
<b>Otherwise, if you used the 1st way, the image is decoded automatically</b>
```python
dataset[2]["image"]
```
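Alternatively, for the locally loaded case, you can cast the raw-bytes column to the `datasets` `Image` feature so decoding happens automatically (a sketch, assuming the column holds encoded image bytes as above):
```python
from datasets import Image

# After casting, dataset[i]["image"] is decoded to a PIL image on access.
dataset = dataset.cast_column("image", Image())
dataset[3]["image"]
```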