---
dataset_info:
  - config_name: deduped
    features:
      - name: title
        dtype: string
      - name: body
        dtype: string
      - name: subreddit
        dtype: string
    splits:
      - name: train
        num_bytes: 87280734834
        num_examples: 121344087
    download_size: 58748515490
    dataset_size: 87280734834
  - config_name: default
    features:
      - name: title
        dtype: string
      - name: body
        dtype: string
      - name: subreddit
        dtype: string
    splits:
      - name: train
        num_bytes: 93764255230
        num_examples: 127445911
    download_size: 62576730319
    dataset_size: 93764255230
  - config_name: mini
    features:
      - name: title
        dtype: string
      - name: body
        dtype: string
      - name: subreddit
        dtype: string
      - name: cluster_id
        dtype: int64
    splits:
      - name: train
        num_bytes: 1842483920
        num_examples: 2487046
    download_size: 1172276509
    dataset_size: 1842483920
configs:
  - config_name: deduped
    data_files:
      - split: train
        path: deduped/train-*
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
  - config_name: mini
    data_files:
      - split: train
        path: mini/train-*
license: odc-by
task_categories:
  - text-generation
  - text2text-generation
---

# reddit-title-body-hf

The `sentence-transformers/reddit-title-body` dataset, converted to parquet format.

## Additional configs

- the `deduped` config, in which the `body` column has been deduplicated via MinHash
- the `mini` config, a ~1 GB subset of the deduped data, produced via a MiniPile-like clustering + sampling approach (it adds a `cluster_id` column)
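
Any of the three configs can be selected by name when loading. A minimal sketch, assuming the repo id is `pszemraj/reddit-title-body-hf` (inferred from this card's title, not stated explicitly in the README):

```python
from datasets import load_dataset

# NOTE: repo id assumed from the card title; adjust if the dataset lives elsewhere
repo_id = "pszemraj/reddit-title-body-hf"

# config names from the YAML metadata above:
#   "default" - full parquet conversion (~127M rows)
#   "deduped" - body column deduplicated via MinHash (~121M rows)
#   "mini"    - ~1 GB clustered+sampled subset (~2.5M rows, adds cluster_id)
ds = load_dataset(repo_id, "deduped", split="train")
print(ds[0]["title"], "->", ds[0]["subreddit"])
```

Given the multi-GB download sizes listed in the metadata, passing `streaming=True` to `load_dataset` avoids downloading the full parquet shards up front.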