---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
dataset_info:
  features:
  - name: text
    dtype: string
  - name: timestamp
    dtype: string
  - name: url
    dtype: string
  splits:
  - name: train
    num_bytes: 137146387873
    num_examples: 18507273
  - name: validation
    num_bytes: 138079468
    num_examples: 18392
  download_size: 4087107539
  dataset_size: 137284467341
license: apache-2.0
task_categories:
- text-generation
language:
- hi
---
# Dataset Card for "mC4-hindi"

This dataset is the Hindi subset of mC4, the multilingual, colossal, cleaned version of Common Crawl's web crawl corpus. mC4 covers natural text in 101 languages; this subset contains only the Hindi portion and spans a variety of text types, including news articles, blog posts, and social media posts.

The dataset is intended for training and evaluating natural language processing models for Hindi. It can be used for a variety of tasks, such as pretraining language models, machine translation, text summarization, and question answering.

**Data format**

The dataset is in JSONL (JSON Lines) format. Each line is a JSON object with the following fields (an example record is shown after the list):

* `text`: the text content of the document.
* `timestamp`: the date and time at which the document was crawled.
* `url`: the URL the document was crawled from.
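
A single record therefore looks roughly like the following. The values are illustrative placeholders, not an actual document from the corpus:

```json
{"text": "यह एक उदाहरण वाक्य है।", "timestamp": "2020-05-07T12:34:56Z", "url": "https://example.com/article"}
```
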
**Data splits**

The dataset is split into two parts: train and validation. The train split contains 18,507,273 examples and the validation split contains 18,392 examples (roughly 0.1% of the data).

**Usage**

To use the dataset, you can load it with the Hugging Face `datasets` library, which returns a `DatasetDict` containing both splits:

```python
from datasets import load_dataset

# Download and cache both splits (about 4 GB compressed).
dataset = load_dataset("zicsx/mC4-hindi")
```
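The full dataset is roughly 137 GB of text once expanded, so if you do not need the whole corpus on disk you can stream it instead. A minimal sketch using the `streaming` option of `load_dataset`:

```python
from datasets import load_dataset

# Stream the train split lazily instead of downloading it in full.
streamed = load_dataset("zicsx/mC4-hindi", split="train", streaming=True)

# Inspect the first few examples.
for i, example in enumerate(streamed):
    print(example["url"])
    if i == 4:
        break
```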
Once you have loaded the dataset, you can access the train and validation splits using the following code:

```python
train_dataset = dataset["train"]
validation_dataset = dataset["validation"]
```
You can then use the dataset to train and evaluate your natural language processing model.
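
As a concrete illustration, the sketch below tokenizes the `text` field in preparation for language-model pretraining. The tokenizer checkpoint and `max_length` are arbitrary example choices, not requirements of the dataset; any Hindi-capable tokenizer will work:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("zicsx/mC4-hindi")

# Any Hindi-capable tokenizer can be substituted here; MuRIL is used purely as an example.
tokenizer = AutoTokenizer.from_pretrained("google/muril-base-cased")

def tokenize(batch):
    # Truncate long documents so examples can be batched at a fixed length.
    return tokenizer(batch["text"], truncation=True, max_length=512)

# Tokenize both splits and drop the raw columns that are no longer needed.
# Note that mapping over the full train split (18.5M documents) takes a while.
tokenized = dataset.map(
    tokenize,
    batched=True,
    remove_columns=["text", "timestamp", "url"],
)

print(tokenized["train"][0].keys())  # e.g. input_ids, token_type_ids, attention_mask
```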