---
license: odc-by
task_categories:
  - text-generation
dataset_info:
  - config_name: all
    features:
      - name: section
        dtype: string
      - name: filename
        dtype: string
      - name: text
        dtype: string
      - name: lang
        struct:
          - name: lang
            dtype: string
          - name: score
            dtype: float64
    splits:
      - name: test
        num_bytes: 193194594
        num_examples: 4334
      - name: validation
        num_bytes: 212130344
        num_examples: 4328
      - name: train
        num_bytes: 7447180733
        num_examples: 159185
    download_size: 2900856885
    dataset_size: 7852505671
  - config_name: doc
    features:
      - name: section
        dtype: string
      - name: filename
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 3241200340.5694714
        num_examples: 62821
      - name: validation
        num_bytes: 85285241.60649045
        num_examples: 1653
      - name: test
        num_bytes: 85336835.82403822
        num_examples: 1654
    download_size: 1309641319
    dataset_size: 3411822418
  - config_name: docx
    features:
      - name: section
        dtype: string
      - name: filename
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 4605598.853503184
        num_examples: 141
      - name: validation
        num_bytes: 261310.57324840763
        num_examples: 8
      - name: test
        num_bytes: 261310.57324840763
        num_examples: 8
    download_size: 1788590
    dataset_size: 5128220
  - config_name: logs
    features:
      - name: section
        dtype: string
      - name: filename
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 2350324475.916881
        num_examples: 9223
      - name: validation
        num_bytes: 61924411.541559376
        num_examples: 243
      - name: test
        num_bytes: 61924411.541559376
        num_examples: 243
    download_size: 718096901
    dataset_size: 2474173298.9999995
  - config_name: ppt
    features:
      - name: section
        dtype: string
      - name: filename
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: validation
        num_bytes: 11813294
        num_examples: 1230
      - name: train
        num_bytes: 426593595
        num_examples: 43706
      - name: test
        num_bytes: 12242562
        num_examples: 1232
    download_size: 232304159
    dataset_size: 450649451
  - config_name: pptx
    features:
      - name: section
        dtype: string
      - name: filename
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 9517778
        num_examples: 963
      - name: validation
        num_bytes: 513930
        num_examples: 53
      - name: test
        num_bytes: 436852
        num_examples: 54
    download_size: 5314310
    dataset_size: 10468560
  - config_name: rtf
    features:
      - name: section
        dtype: string
      - name: filename
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 61558658.13180516
        num_examples: 942
      - name: validation
        num_bytes: 3398142.4871060173
        num_examples: 52
      - name: test
        num_bytes: 3463491.3810888254
        num_examples: 53
    download_size: 22547280
    dataset_size: 68420292
  - config_name: txt
    features:
      - name: section
        dtype: string
      - name: filename
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 1358006724.1432111
        num_examples: 41393
      - name: validation
        num_bytes: 35727522.10740843
        num_examples: 1089
      - name: test
        num_bytes: 35760329.749380335
        num_examples: 1090
    download_size: 608912009
    dataset_size: 1429494576
configs:
  - config_name: all
    data_files:
      - split: test
        path: all/test-*
      - split: validation
        path: all/validation-*
      - split: train
        path: all/train-*
  - config_name: doc
    data_files:
      - split: train
        path: doc/train-*
      - split: validation
        path: doc/validation-*
      - split: test
        path: doc/test-*
  - config_name: docx
    data_files:
      - split: train
        path: docx/train-*
      - split: validation
        path: docx/validation-*
      - split: test
        path: docx/test-*
  - config_name: logs
    data_files:
      - split: train
        path: logs/train-*
      - split: validation
        path: logs/validation-*
      - split: test
        path: logs/test-*
  - config_name: ppt
    data_files:
      - split: validation
        path: ppt/validation-*
      - split: train
        path: ppt/train-*
      - split: test
        path: ppt/test-*
  - config_name: pptx
    data_files:
      - split: train
        path: pptx/train-*
      - split: validation
        path: pptx/validation-*
      - split: test
        path: pptx/test-*
  - config_name: rtf
    data_files:
      - split: train
        path: rtf/train-*
      - split: validation
        path: rtf/validation-*
      - split: test
        path: rtf/test-*
  - config_name: txt
    data_files:
      - split: train
        path: txt/train-*
      - split: validation
        path: txt/validation-*
      - split: test
        path: txt/test-*
---

# govdocs1 Dataset: By File Extension

Markdown-parsed versions of documents in govdocs1 with light filtering.

## Usage

Load a specific file format (e.g., `.doc` files) parsed to markdown with pandoc:

```python
from datasets import load_dataset

# Replace "doc" with the desired config name
dataset = load_dataset("BEE-spoke-data/govdocs1-by-extension", "doc")
```
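Each record follows the features schema above (`section`, `filename`, `text`). A minimal post-processing sketch, with plain dicts standing in for dataset rows and a hypothetical length threshold chosen purely for illustration:

```python
# Each row follows the features schema: section, filename, text.
rows = [
    {"section": "000", "filename": "000123.doc", "text": "Short note."},
    {"section": "001", "filename": "001456.doc",
     "text": "A much longer parsed document body with several sentences."},
]

def keep_long_texts(examples, min_chars=20):
    """Keep only rows whose parsed text has at least `min_chars` characters."""
    return [ex for ex in examples if len(ex["text"]) >= min_chars]

filtered = keep_long_texts(rows)
```

The same predicate can be passed to `Dataset.filter` on the loaded dataset.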

## Configurations

This dataset includes multiple configurations: `all` (every extension combined, with an additional `lang` field) plus one per file extension:

- `doc`
- `docx`
- `logs`
- `ppt`
- `pptx`
- `rtf`
- `txt`

Each configuration contains train, validation, and test splits.

## Dataset Details

- **Download size:** varies by configuration
- **Dataset size:** varies by configuration
- **Splits:** `train`, `validation`, `test`
- **Features:** `section`, `filename`, `text`; the `all` config adds a `lang` struct (`lang` string, `score` float)
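Per the metadata, the `lang` field in the `all` config is a struct with a `lang` string and a float `score`. A sketch of filtering on that field, with plain dicts standing in for dataset rows and an arbitrary `0.9` confidence threshold:

```python
# Rows from the "all" config carry a lang struct: {"lang": <code>, "score": <float>}.
rows = [
    {"filename": "a.txt", "text": "...", "lang": {"lang": "en", "score": 0.98}},
    {"filename": "b.txt", "text": "...", "lang": {"lang": "en", "score": 0.42}},
    {"filename": "c.txt", "text": "...", "lang": {"lang": "de", "score": 0.95}},
]

def confident_english(examples, threshold=0.9):
    """Keep rows identified as English above a confidence threshold."""
    return [
        ex for ex in examples
        if ex["lang"]["lang"] == "en" and ex["lang"]["score"] >= threshold
    ]

kept = confident_english(rows)
```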

## Counts

A summary of the number of examples in each configuration, taken from the dataset metadata:

| Config | Train   | Validation | Test  |
| ------ | ------: | ---------: | ----: |
| `all`  | 159,185 | 4,328      | 4,334 |
| `doc`  | 62,821  | 1,653      | 1,654 |
| `docx` | 141     | 8          | 8     |
| `logs` | 9,223   | 243        | 243   |
| `ppt`  | 43,706  | 1,230      | 1,232 |
| `pptx` | 963     | 53         | 54    |
| `rtf`  | 942     | 52         | 53    |
| `txt`  | 41,393  | 1,089      | 1,090 |
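The byte sizes in the metadata also give a rough sense of average document length per config (these are sizes of the stored rows, not of the original files; values are copied, rounded, from the metadata above):

```python
# Train-split sizes copied (rounded) from the dataset metadata above.
train_bytes = {"logs": 2350324476, "txt": 1358006724, "doc": 3241200341}
train_examples = {"logs": 9223, "txt": 41393, "doc": 62821}

# Rough average parsed-document size per config, in bytes.
avg_bytes = {name: train_bytes[name] / train_examples[name] for name in train_bytes}
```

Log files average roughly 255 KB each, versus around 33 KB for `.txt` and 52 KB for `.doc`.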

## Citation

```bibtex
@inproceedings{garfinkel2009bringing,
  title={Bringing Science to Digital Forensics with Standardized Forensic Corpora},
  author={Garfinkel, Simson and others},
  booktitle={Digital Forensic Research Workshop (DFRWS) 2009},
  year={2009},
  address={Montreal, Canada},
  url={https://digitalcorpora.org/corpora/file-corpora/files/}
}
```

For more detailed information on each configuration, refer to the dataset documentation.