---
language:
- es
dataset_info:
  features:
  - name: text
    dtype: string
  - name: meta
    dtype: string
  - name: score
    dtype: float64
  - name: int_score
    dtype: int64
  splits:
  - name: train
    num_bytes: 1201679966776
    num_examples: 128920537
  download_size: 700567029628
  dataset_size: 1201679966776
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# RedPajama's High Quality Spanish subset

## What is this?

This is a high-quality dataset distilled from the Spanish subset of [RedPajama-Data-v2](https://github.com/togethercomputer/RedPajama-Data), created using the methodology proposed in [FineWeb-Edu](https://arxiv.org/abs/2406.17557).

## Usage

```python
from datasets import load_dataset

ds = load_dataset("latam-gpt/red_pajama_es_hq") 
```
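
The full download is roughly 700 GB, so it may be more practical to stream the corpus rather than materialize it locally. A minimal sketch, assuming you only need to iterate over the train split:

```python
from datasets import load_dataset

# stream the train split instead of downloading the full ~700 GB corpus
ds = load_dataset("latam-gpt/red_pajama_es_hq", split="train", streaming=True)

# inspect the first example
example = next(iter(ds))
print(example["score"], example["text"][:200])
```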

### Filtering by quality score

Each document in this corpus carries an academic-quality score between 2.5 and 5, with higher scores indicating better quality. The dataset can be filtered by score using the standard `filter` method.

```python
from datasets import load_dataset

ds = load_dataset("latam-gpt/red_pajama_es_hq")

# filter the dataset for scores > 3
filtered_ds = ds.filter(lambda x: x['score'] > 3)
```
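
The `int_score` column can also be used for coarser, bucket-style filtering; the sketch below assumes it is the integer-rounded counterpart of `score`:

```python
from datasets import load_dataset

ds = load_dataset("latam-gpt/red_pajama_es_hq")

# assumption: int_score is the rounded integer version of score
high_quality = ds.filter(lambda x: x["int_score"] >= 4)
```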

## Dataset creation

In a nutshell, we used Llama-3.1-70B to grade the educational quality of 550k samples from the original dataset. We then used these graded samples to train an encoder-based classifier that learns to assign a score from 0 to 5. Since this classifier is far cheaper to run than a large generative model, we can apply it at scale over the entire dataset, which lets us filter out a high-quality subset.
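
For illustration, the scoring step roughly corresponds to running a sequence-classification model with a single regression output over each document. The checkpoint name below is a placeholder, not the actual classifier released with this dataset; see the linked implementation for the real pipeline.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# placeholder checkpoint name; the actual classifier is described in
# https://github.com/latam-gpt/llm-data-eval
model_name = "path/to/educational-quality-classifier"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)

def score_text(text: str) -> float:
    # truncate to the encoder's context window and run one forward pass
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # the single regression output is interpreted as a 0-5 quality score
    return logits.squeeze().item()

print(score_text("La fotosíntesis es el proceso por el cual las plantas convierten la luz en energía."))
```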

Here is an overview of the architecture:

<div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/61b15c3f20037ec5d7c91aa6/H5xPOHy_4RhMEDtGvsnTE.png" width="400">
</div>

For more detailed information on how this dataset was created, refer to [our open implementation](https://github.com/latam-gpt/llm-data-eval).

## What is Latam-GPT?

[Latam-GPT](https://www.latamgpt.org/) is a Latin American initiative to develop a large language model built entirely in the region. The project encompasses all development stages — from data collection and pre-training to final model refinement — making it the first foundation model created completely within Latin America.

## License

The text documents of the source dataset (RedPajama-Data-v2) were collected from 84 CommonCrawl snapshots, processed with the CCNet pipeline, and released under an Apache 2.0 license by the Together Computer team under the jurisdiction of the United States of America.
That jurisdiction may differ from those of Latin American countries. To comply with the terms of use of the Common Crawl Foundation, and in the interest of the greatest possible transparency, we provide the following contact for any questions, comments, or complaints: eugenio.herrera@cenia.cl.