Update README.md
README.md CHANGED

```diff
@@ -23,7 +23,45 @@ dataset_info:
   num_examples: 18392
   download_size: 4087107539
   dataset_size: 137284467341
+license: apache-2.0
+task_categories:
+- text-generation
+language:
+- hi
 ---
```

# Dataset Card for "mC4-hindi"

This dataset is a subset of mC4, a multilingual, colossal, cleaned version of Common Crawl's web crawl corpus containing natural text in 101 languages. This subset is focused specifically on Hindi and contains a variety of text types, including news articles, blog posts, and social media posts.

This dataset is intended for training and evaluating natural language processing models for Hindi. It can be used for a variety of tasks, such as pretraining language models, machine translation, text summarization, and question answering.

**Data format**

The dataset is in JSONL format. Each line in the file is a JSON object with the following fields:

* `text`: the text of the document.
* `timestamp`: the date and time when the document was crawled.
* `url`: the URL of the document.
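
For illustration, a single line of the file might look like the following (the values here are hypothetical, not drawn from the actual data):

```python
import json

# A hypothetical mC4-style record; real documents contain Hindi web text.
line = '{"text": "नमस्ते दुनिया", "timestamp": "2020-05-07T12:34:56Z", "url": "https://example.com/article"}'

record = json.loads(line)
print(record["text"])       # the document text
print(record["timestamp"])  # crawl date and time
print(record["url"])        # source URL
```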

**Data splits**

The dataset is split into three parts: the train split contains 90% of the data, the validation split contains 5%, and the test split contains 5%.
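
Once the dataset is loaded (see the Usage section below), a short sketch like this can confirm the split names and sizes; it assumes the standard `DatasetDict` interface:

```python
# Assumes `dataset` was loaded as shown in the Usage section below.
# Print each split's name and its number of examples.
for split_name, split in dataset.items():
    print(split_name, len(split))
```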

**Usage**

To use the dataset, you can load it into a Hugging Face `Dataset` object using the following code:

```python
import datasets

# Download and load every available split from the Hugging Face Hub.
dataset = datasets.load_dataset("zicsx/mC4-hindi")
```
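
Given the dataset's size (the metadata above reports a dataset_size of roughly 137 GB), you may prefer streaming to a full download; this is a sketch using the library's streaming mode:

```python
import datasets

# Stream examples lazily instead of downloading the full dataset.
streamed = datasets.load_dataset("zicsx/mC4-hindi", streaming=True)

# Peek at the first example of the train split.
first = next(iter(streamed["train"]))
print(first["url"])
```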

Once you have loaded the dataset, you can access the train and validation splits using the following code:

```python
# Each split is a `datasets.Dataset` object.
train_dataset = dataset["train"]
validation_dataset = dataset["validation"]
```

You can then use the dataset to train and evaluate your natural language processing model.
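
For example, a minimal sketch for inspecting a few training examples before wiring them into a pipeline (field names as described above):

```python
# Look at the first three training examples.
for example in train_dataset.select(range(3)):
    print(example["url"])
    print(example["text"][:200])  # first 200 characters of the document
```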