---
language:
- en
license: odc-by
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: dump
    dtype: string
  - name: url
    dtype: string
  - name: file_path
    dtype: string
  - name: language
    dtype: string
  - name: language_score
    dtype: float64
  - name: token_count
    dtype: int64
  - name: score
    dtype: float64
  - name: int_score
    dtype: int64
  splits:
  - name: train
    num_bytes: 2003951.6881090973
    num_examples: 394
  download_size: 2733459
  dataset_size: 2003951.6881090973
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# FineWeb-Edu Micro
This dataset is a subset of the FineWeb-Edu sample-10BT split, keeping only passages that are at least 1,000 tokens long, for a total of roughly 1 million tokens.
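Below is a minimal sketch of how a subset like this could be reproduced; it is not the exact script used to build the dataset, and it assumes the `HuggingFaceFW/fineweb-edu` repo with its `sample-10BT` configuration and the `token_count` column shown in the schema above.

```python
from datasets import load_dataset

# Stream the sample-10BT configuration to avoid downloading the full ~10B-token sample.
stream = load_dataset(
    "HuggingFaceFW/fineweb-edu",
    name="sample-10BT",
    split="train",
    streaming=True,
)

kept, total_tokens = [], 0
for row in stream:
    # Keep only passages that are at least 1,000 tokens long.
    if row["token_count"] >= 1000:
        kept.append(row)
        total_tokens += row["token_count"]
    # Stop once roughly 1 million tokens have been collected.
    if total_tokens >= 1_000_000:
        break

print(f"{len(kept)} passages, {total_tokens} tokens")
```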
This dataset was made primarily to evaluate different RAG chunking strategies in [Chonkie](https://github.com/bhavnicksm/chonkie).
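As a rough illustration of that use case, the sketch below loads the dataset and runs a Chonkie chunker over every passage. The repo id is a placeholder for wherever this dataset is hosted on the Hub, and the exact chunker classes and arguments may differ across Chonkie versions.

```python
from datasets import load_dataset
from chonkie import TokenChunker  # chunker choice here is illustrative

# Hypothetical repo id -- replace with the actual Hub path of this dataset.
ds = load_dataset("your-username/fineweb-edu-micro", split="train")

# Chunk every passage and inspect the resulting chunk-size distribution,
# the kind of comparison this dataset was built for.
chunker = TokenChunker(chunk_size=512)
sizes = []
for row in ds:
    for chunk in chunker.chunk(row["text"]):
        sizes.append(chunk.token_count)

print(f"{len(sizes)} chunks, mean size {sum(sizes) / len(sizes):.1f} tokens")
```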