---
language:
  - vi
license: apache-2.0
tags:
  - vietnamese
  - text
  - corpus
size_categories:
  - 10M<n<100M
---

# Vietnamese Combined Corpus

## Dataset Statistics

- Total documents: <15M
  - Wikipedia articles: >1.3M
  - News articles: >13M
  - Text documents: >200K

## Processing Details

- Processed using Apache Spark
- Minimum document length: {self.min_doc_length} characters
- Text cleaning applied:
  - HTML/special character removal
  - Whitespace normalization
  - URL removal
  - Empty document filtering
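The cleaning steps listed above can be sketched in plain Python. This is an illustrative re-implementation, not the dataset's actual Spark pipeline; the `MIN_DOC_LENGTH` value is a placeholder, since the card's `{self.min_doc_length}` template variable was never filled in.

```python
import re

MIN_DOC_LENGTH = 100  # placeholder; the real minimum length is not published


def clean_text(text: str) -> str:
    """Apply the cleaning steps listed above (illustrative only)."""
    text = re.sub(r"<[^>]+>", " ", text)       # HTML/special character removal
    text = re.sub(r"https?://\S+", " ", text)  # URL removal
    text = re.sub(r"\s+", " ", text).strip()   # whitespace normalization
    return text


def keep_document(text: str) -> bool:
    """Empty/short-document filtering after cleaning."""
    return len(clean_text(text)) >= MIN_DOC_LENGTH
```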

## Data Format

Each document has two fields:

- `text`: the document content
- `source`: origin of the document (`wikipedia`, `news`, or `text`)

## Usage Example

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("{username}/{dataset_name}")

# Stream the dataset (memory efficient)
dataset = load_dataset("{username}/{dataset_name}", streaming=True)

# Filter by source
wiki_docs = dataset.filter(lambda x: x["source"] == "wikipedia")
```
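In streaming mode, records are plain dicts with the two fields described above, so per-source statistics can be computed without loading the corpus into memory. The list below is a stand-in for the streamed split (in practice it would come from `load_dataset(..., streaming=True)`); the record contents are invented for illustration.

```python
from collections import Counter
from itertools import islice

# Stand-in for the streamed "train" split; real records have the same shape.
stream = [
    {"text": "Bài viết bách khoa", "source": "wikipedia"},
    {"text": "Bản tin buổi sáng", "source": "news"},
    {"text": "Tài liệu văn bản", "source": "text"},
    {"text": "Một bản tin khác", "source": "news"},
]

# Count documents per source over at most the first 1000 records,
# consuming the stream lazily instead of materializing it.
counts = Counter(doc["source"] for doc in islice(stream, 1000))
```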

## Updates

Released: 2024-12-17