---
configs:
  - config_name: single_256
    data_files:
      - path: single_256/data-00000-of-00001.arrow
        split: train
  - config_name: single_512
    data_files:
      - path: single_512/data-00000-of-00001.arrow
        split: train
  - config_name: single_1024
    data_files:
      - path: single_1024/data-00000-of-00001.arrow
        split: train
  - config_name: single_2048
    data_files:
      - path: single_2048/data-00000-of-00001.arrow
        split: train
  - config_name: single_4096
    data_files:
      - path: single_4096/data-00000-of-00001.arrow
        split: train
  - config_name: multiple_256
    data_files:
      - path: multiple_256/data-00000-of-00001.arrow
        split: train
  - config_name: multiple_512
    data_files:
      - path: multiple_512/data-00000-of-00001.arrow
        split: train
  - config_name: multiple_1024
    data_files:
      - path: multiple_1024/data-00000-of-00001.arrow
        split: train
  - config_name: multiple_2048
    data_files:
      - path: multiple_2048/data-00000-of-00001.arrow
        split: train
  - config_name: multiple_4096
    data_files:
      - path: multiple_4096/data-00000-of-00001.arrow
        split: train
  - config_name: delete_256
    data_files:
      - path: delete_256/data-00000-of-00001.arrow
        split: train
  - config_name: delete_512
    data_files:
      - path: delete_512/data-00000-of-00001.arrow
        split: train
  - config_name: delete_1024
    data_files:
      - path: delete_1024/data-00000-of-00001.arrow
        split: train
  - config_name: delete_2048
    data_files:
      - path: delete_2048/data-00000-of-00001.arrow
        split: train
  - config_name: delete_4096
    data_files:
      - path: delete_4096/data-00000-of-00001.arrow
        split: train
---

## Dataset Organization

The DEG dataset is organized into subsets by edit type (single, multiple, delete), and each subset has one configuration per context length (256, 512, 1024, 2048, and 4096).
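Given that naming scheme, the full list of configuration names can be enumerated programmatically. This is a minimal sketch, with the edit types and context lengths taken from the configuration list in this README:

```python
# Build the DEG configuration names, which follow the pattern {subset}_{context_length}
subsets = ["single", "multiple", "delete"]
context_lengths = [256, 512, 1024, 2048, 4096]

config_names = [f"{s}_{n}" for s in subsets for n in context_lengths]
print(config_names)  # 15 configurations, e.g. "single_256", "delete_4096"
```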

## Instructions to Download

You can load any configuration with the Hugging Face `datasets` library, as in the example below.

```python
from datasets import load_dataset

# Load a specific edit type and context length
deg_data = load_dataset("wanglab/deg", "single_256")
# Access the train split for this configuration
train_data = deg_data["train"]
```