---
language:
- es
dataset_info:
  features:
  - name: text
    dtype: string
  - name: meta
    dtype: string
  - name: score
    dtype: float64
  - name: int_score
    dtype: int64
  splits:
  - name: train
    num_bytes: 1201679966776
    num_examples: 128920537
  download_size: 700567029628
  dataset_size: 1201679966776
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# RedPajama's High Quality Spanish subset

## What is this?

This is a high-quality dataset distilled from the Spanish subset of [RedPajama-Data-v2](https://github.com/togethercomputer/RedPajama-Data), created using the methodology proposed in [FineWeb-Edu](https://arxiv.org/abs/2406.17557).

## Usage

```python
from datasets import load_dataset

ds = load_dataset("latam-gpt/red_pajama_es_hq")
```

### Filtering by quality score

Documents in this corpus are scored on educational quality from 2.5 to 5, with higher scores indicating higher quality. The dataset can be filtered by score using the standard `datasets` filtering methods.

```python
from datasets import load_dataset

ds = load_dataset("latam-gpt/red_pajama_es_hq")

# keep only documents with a quality score above 3
filtered_ds = ds.filter(lambda x: x['score'] > 3)
```

## Dataset creation

In a nutshell, we used Llama-3.1-70B to grade the educational quality of 550k samples from the original dataset. We then used these graded samples to train an encoder-based classifier, so that it learns to assign each document a score from 0 to 5. Since this classifier is much cheaper to run than a large generative model, we can apply it at scale over the entire dataset, which allows us to filter out a high-quality subset. Here is an overview of the architecture:
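To complement the overview, here is a minimal sketch of what the second step (training the encoder-based regression classifier on the LLM-graded samples) could look like with `transformers`. The base model (`xlm-roberta-base`), the hyperparameters, the output directory, and the toy data are illustrative assumptions, not the exact setup used for this dataset:

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Stand-in for the ~550k (text, score) pairs graded by Llama-3.1-70B
annotated = Dataset.from_dict({
    "text": ["Ejemplo de documento educativo...", "Otro documento web..."],
    "labels": [4.0, 1.0],  # 0-5 quality scores, as floats for regression
})

base_model = "xlm-roberta-base"  # assumption: any multilingual encoder works here
tokenizer = AutoTokenizer.from_pretrained(base_model)

# num_labels=1 with float labels yields a regression head trained with MSE loss
model = AutoModelForSequenceClassification.from_pretrained(
    base_model, num_labels=1, problem_type="regression"
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = annotated.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="es-quality-classifier",
        per_device_train_batch_size=16,
        num_train_epochs=3,
    ),
    train_dataset=tokenized,
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```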
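And a matching sketch of the scale-out step: once trained, the classifier scores documents in batches and the corpus is filtered on the predicted score. Again, the checkpoint name and the in-memory stand-in corpus are assumptions for illustration; in practice this runs over all of RedPajama-v2's Spanish documents.

```python
import torch
from datasets import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ckpt = "es-quality-classifier"  # hypothetical checkpoint from the sketch above
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt).eval()

def score_batch(batch):
    inputs = tokenizer(
        batch["text"], truncation=True, max_length=512,
        padding=True, return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**inputs).logits  # regression head: shape (batch, 1)
    return {"score": logits.squeeze(-1).tolist()}

# Stand-in for the unfiltered corpus
corpus = Dataset.from_dict({"text": ["Un documento web...", "Otro texto..."]})
scored = corpus.map(score_batch, batched=True, batch_size=64)

# keep only documents at or above the 2.5 cut-off used for this release
high_quality = scored.filter(lambda x: x["score"] >= 2.5)
```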