---
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 198162960699
      num_examples: 105014023
  download_size: 125747187034
  dataset_size: 198162960699
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
pretty_name: OSCAR 2023.1 subset
license: cc0-1.0
multilinguality:
  - multilingual
source_datasets:
  - oscar-corpus/OSCAR-2301
task_categories:
  - fill-mask
  - text-generation
task_ids:
  - language-modeling
paperswithcode_id: oscar
extra_gated_prompt: >-
  By filling the form below, you understand that only the metadata and the
  annotations of OSCAR 23.01 have a cc0-1.0 license, and that the rest of the
  content is crawled data derived from the November/December 2022 snapshot of
  Common Crawl, for which the authors of OSCAR **do not** hold any copyright
  whatsoever.
extra_gated_fields:
  Name: text
  Email: text
  Affiliation: text
  Country: text
  Usecase: text
  I have explicitly checked with my jurisdiction and I confirm that downloading OSCAR 2301 is legal in the country/region where I am located right now, and for the use case that I have described above: checkbox
tags:
  - oscar
---

# OSCAR 2023.1 subset

This dataset is a subset of OSCAR 2023.1, obtained by randomly sampling 50% of the documents from the first 30 JSONL files of each language in the mother corpus and then truncating each sampled document to its first 2048 Unicode code points. It therefore contains every language in OSCAR, but drastically oversamples the less frequent languages relative to the larger ones.
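The exact script used to build the subset is not included here; the following is a minimal sketch of the procedure described above, assuming the OSCAR 23.01 shards are gzipped JSONL files with a `content` field. The paths, seed, and output layout are illustrative, and the assignment of the released subset's `id` column is omitted.

```python
import gzip
import json
import random

rng = random.Random(0)  # illustrative seed; the actual seed is not documented


def subset_language(shard_paths, out_path, keep_prob=0.5, max_chars=2048):
    """Sample ~50% of documents from the first 30 shards of one language,
    truncating each kept document to its first 2048 Unicode code points."""
    with open(out_path, "w", encoding="utf-8") as out:
        for path in sorted(shard_paths)[:30]:      # first 30 JSONL files
            with gzip.open(path, "rt", encoding="utf-8") as f:
                for line in f:
                    if rng.random() >= keep_prob:  # drop ~50% of documents
                        continue
                    doc = json.loads(line)
                    # Slicing a Python str truncates by code point, not byte.
                    out.write(json.dumps({"text": doc["content"][:max_chars]},
                                         ensure_ascii=False) + "\n")
```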

## Languages

For convenience, all language files are shipped in a single folder, so the dataset can be loaded as a whole without selecting individual languages.
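For example, with 🤗 Datasets (assuming the repository id is `mittagessen/oscar_subset`; streaming avoids materializing the full ~126 GB download up front):

```python
from datasets import load_dataset

# One config, one split: all languages load together from `train`.
ds = load_dataset("mittagessen/oscar_subset", split="train", streaming=True)

for doc in ds.take(3):
    print(doc["id"], doc["text"][:80])
```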

## Supported Tasks

This dataset is primarily intended for pretraining tiny multilingual language models with a limited context length (~2048, e.g. for tokenization-free byte embeddings), such as ByteLlama.
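ByteLlama's exact input pipeline is not documented here, but the 2048-code-point truncation pairs naturally with byte-level inputs. A hypothetical sketch of tokenization-free byte encoding (`encode_bytes` is not part of any library):

```python
def encode_bytes(text: str, context_length: int = 2048) -> list[int]:
    """Encode text as raw UTF-8 byte values (0-255), truncated to the
    model's context length. A 2048-code-point document can occupy up to
    4x as many bytes after UTF-8 encoding, so truncation is still
    needed at this stage.
    """
    return list(text.encode("utf-8")[:context_length])
```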