---
language:
- vi
license: apache-2.0
tags:
- vietnamese
- text
- corpus
size_categories:
- 10M<n<100M
---
# Vietnamese Combined Corpus
## Dataset Statistics

- Total documents: <15M
- Wikipedia articles: >1.3M
- News articles: >13M
- Text documents: >200K
## Processing Details

- Processed using Apache Spark
- Minimum document length: 10 characters
- Text cleaning applied:
  - HTML/special character removal
  - Whitespace normalization
  - URL removal
  - Empty document filtering
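The cleaning steps listed above can be sketched in plain Python. The actual pipeline runs on Apache Spark, so this is only an illustrative re-implementation; the function names, regexes, and sample documents are hypothetical, not the dataset's real code.

```python
import re

MIN_LENGTH = 10  # minimum document length in characters (per the card)

def clean_text(text: str) -> str:
    """Apply the cleaning steps described in the card (illustrative)."""
    text = re.sub(r"<[^>]+>", " ", text)       # strip HTML tags
    text = re.sub(r"https?://\S+", " ", text)  # remove URLs
    text = re.sub(r"\s+", " ", text).strip()   # normalize whitespace
    return text

def keep_document(text: str) -> bool:
    """Drop empty or too-short documents after cleaning."""
    return len(text) >= MIN_LENGTH

# Toy examples: one keepable document, one empty, one too short.
docs = [
    "<p>Xin chào   thế giới!</p> xem https://example.com",
    "   ",
    "ngắn",
]
cleaned = [clean_text(d) for d in docs]
kept = [d for d in cleaned if keep_document(d)]
# Only the first document survives cleaning and the length filter.
```

In a Spark job the same logic would typically run as a UDF over a DataFrame column, followed by a length filter.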
## Data Format

Each document has two fields:

- `text`: the document content
- `source`: origin of the document (`wikipedia`, `news`, or `text`)
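A record in this format might look like the following. The values are invented for illustration; only the two field names and the three source labels come from the card.

```python
# Hypothetical record in the documented format (illustrative values).
record = {
    "text": "Hà Nội là thủ đô của Việt Nam.",
    "source": "wikipedia",  # one of: "wikipedia", "news", "text"
}

VALID_SOURCES = {"wikipedia", "news", "text"}

def is_valid_record(rec: dict) -> bool:
    # Both fields must be present, `text` must be a string,
    # and `source` must be one of the documented labels.
    return isinstance(rec.get("text"), str) and rec.get("source") in VALID_SOURCES
```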
## Usage Example

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("{username}/{dataset_name}")

# Filter by source
wiki_docs = dataset.filter(lambda x: x["source"] == "wikipedia")
```
## Updates

- Released: 2024-12-17