Create README.md

README.md (added):

---
language:
- vi
license: apache-2.0
tags:
- vietnamese
- text
- corpus
size_categories:
- 10M<n<100M
---

# Vietnamese Combined Corpus

## Dataset Statistics
- Total documents: <15M
- Wikipedia articles: >1.3M
- News articles: >13M
- Text documents: >200K
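
The per-source counts above can be re-derived from the corpus itself. A minimal sketch, assuming the dataset exposes a `train` split (not stated in this README); streaming every document this way is slow but avoids a full download:

```python
from collections import Counter

from datasets import load_dataset

# Stream the corpus and tally documents by their "source" field.
stream = load_dataset("{username}/{dataset_name}", streaming=True, split="train")
counts = Counter(doc["source"] for doc in stream)
print(counts)  # expected to roughly match the statistics listed above
```
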
## Processing Details
- Processed using Apache Spark
- Minimum document length: {self.min_doc_length} characters
- Text cleaning applied (see the sketch after this list):
  - HTML/special character removal
  - Whitespace normalization
  - URL removal
  - Empty document filtering
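
A minimal sketch of that cleaning pass, assuming PySpark and a `text` column; the input/output paths, the regexes, and the 100-character threshold are illustrative stand-ins (the real threshold is the unfilled {self.min_doc_length} placeholder above):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("vi-corpus-clean").getOrCreate()

MIN_DOC_LENGTH = 100  # illustrative; the real value is {self.min_doc_length}

cleaned = (
    spark.read.json("raw_documents.jsonl")  # hypothetical input path
    # HTML/special character removal (simplified regex)
    .withColumn("text", F.regexp_replace("text", r"<[^>]+>", " "))
    # URL removal
    .withColumn("text", F.regexp_replace("text", r"https?://\S+", " "))
    # Whitespace normalization
    .withColumn("text", F.trim(F.regexp_replace("text", r"\s+", " ")))
    # Empty document filtering + minimum length
    .filter(F.length("text") >= MIN_DOC_LENGTH)
)

cleaned.write.mode("overwrite").parquet("cleaned_documents.parquet")
```
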
## Data Format
Each document has two fields:
- 'text': the document content
- 'source': the origin of the document (wikipedia/news/text)
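
For illustration, a single record looks like this (the Vietnamese sentence is invented):

```python
{"text": "Hà Nội là thủ đô của Việt Nam.", "source": "wikipedia"}
```
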
## Usage Example

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("{username}/{dataset_name}")

# Or stream it (memory efficient)
dataset = load_dataset(
    "{username}/{dataset_name}",
    streaming=True,
)

# Filter by source
wiki_docs = dataset.filter(lambda x: x["source"] == "wikipedia")
```
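
For quick inspection without materializing anything, streaming mode also supports `take`. A small sketch, again assuming a `train` split (not stated in this README):

```python
from datasets import load_dataset

stream = load_dataset("{username}/{dataset_name}", streaming=True, split="train")
for doc in stream.take(3):
    # Print each document's source and its first 80 characters
    print(doc["source"], doc["text"][:80])
```
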
## Updates
Released: 2024-12-17