---
language:
  - es
dataset_info:
  features:
    - name: text
      dtype: string
    - name: meta
      dtype: string
    - name: score
      dtype: float64
    - name: int_score
      dtype: int64
  splits:
    - name: train
      num_bytes: 1201679966776
      num_examples: 128920537
  download_size: 700567029628
  dataset_size: 1201679966776
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# RedPajama's High Quality Spanish subset

## What is this?

This is a high-quality dataset distilled from the Spanish subset of RedPajama-Data-v2, created using the methodology proposed in FineWeb-Edu.

## Usage

```python
from datasets import load_dataset

ds = load_dataset("latam-gpt/red_pajama_es_hq")
```
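
Note that the full download is roughly 700 GB (about 1.2 TB on disk, per the dataset metadata above). If that is impractical, a minimal sketch of streaming the corpus instead, using the standard `streaming=True` option of `load_dataset`:

```python
from datasets import load_dataset

# stream records lazily instead of downloading the whole corpus
ds = load_dataset("latam-gpt/red_pajama_es_hq", split="train", streaming=True)

for example in ds:
    print(example["text"][:200], example["score"])
    break  # inspect a single record
```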

## Filtering by quality score

Documents in this corpus carry an academic quality score between 2.5 and 5, where higher scores indicate higher quality. The dataset can be filtered by score with the standard `filter` method.

```python
from datasets import load_dataset

ds = load_dataset("latam-gpt/red_pajama_es_hq")

# filter the dataset for scores > 3
filtered_ds = ds.filter(lambda x: x["score"] > 3)
```
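
The same predicate can also be applied lazily to a streamed copy of the corpus, so nothing has to be downloaded up front; a small sketch:

```python
from datasets import load_dataset

# with streaming=True, filter() is applied lazily as records are read
ds = load_dataset("latam-gpt/red_pajama_es_hq", split="train", streaming=True)
filtered = ds.filter(lambda x: x["score"] > 3)

first = next(iter(filtered))  # first document that passes the threshold
```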

## Dataset creation

In a nutshell, we used Llama-3.1-70B to grade the educational quality of 550k samples from the original dataset. We then used these graded samples to train an encoder-based classifier that learns to assign each document a score from 0 to 5. Since this classifier is much cheaper to run than a large language model, we can apply it at scale over the entire dataset, which lets us extract its high-quality portion.
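
As a rough illustration of the scoring step, here is a minimal sketch in the spirit of the FineWeb-Edu classifier setup, assuming an encoder fine-tuned with a single-output regression head. The checkpoint name below is hypothetical; the actual model and training details are in our open implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# hypothetical checkpoint name -- see the open implementation for the real classifier
model_name = "latam-gpt/es-edu-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)
model.eval()

text = "La fotosíntesis es el proceso por el cual las plantas producen su alimento."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits  # single regression output

score = logits.squeeze().item()                    # continuous quality score, roughly in [0, 5]
int_score = int(round(max(0.0, min(score, 5.0))))  # clamped integer score, as in `int_score`
print(score, int_score)
```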

Here is an overview of the architecture:

[Figure: overview of the dataset creation pipeline]

For more detailed information on how this dataset was created, refer to our open implementation.

## License

Please refer to the Common Crawl Foundation Terms of Use for the data. The code used to load and process the dataset is licensed under the Apache 2.0 license.