---
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: text
      dtype: string
    - name: meta
      struct:
        - name: warc_headers
          struct:
            - name: warc-record-id
              dtype: string
            - name: warc-date
              dtype: string
            - name: content-type
              dtype: string
            - name: content-length
              dtype: int32
            - name: warc-type
              dtype: string
            - name: warc-identified-content-language
              dtype: string
            - name: warc-refers-to
              dtype: string
            - name: warc-target-uri
              dtype: string
            - name: warc-block-digest
              dtype: string
        - name: identification
          struct:
            - name: label
              dtype: string
            - name: prob
              dtype: float32
        - name: annotations
          sequence: string
        - name: line_identifications
          list:
            - name: label
              dtype: string
            - name: prob
              dtype: float32
    - name: perplexity_score
      dtype: float64
    - name: text_length
      dtype: int64
    - name: url
      dtype: string
    - name: domain
      dtype: string
    - name: dup_ratio
      dtype: float64
    - name: pairs
      sequence:
        sequence: int64
    - name: repetitions
      sequence: binary
    - name: included_in_dedup
      dtype: bool
    - name: cluster
      sequence: int64
    - name: has_dup_25
      dtype: bool
  splits:
    - name: train
      num_bytes: 3188540880787
      num_examples: 431992659
  download_size: 1732364041898
  dataset_size: 3188540880787
---

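A minimal sketch of loading one record and inspecting the nested `meta` struct declared in the schema above. The repository id below is a placeholder; substitute this dataset's actual path on the Hub.

```python
from datasets import load_dataset

# Placeholder repo id; replace with this dataset's actual Hub path.
ds = load_dataset("user/oscar-dedup", split="train", streaming=True)

# Pull a single record and look at the nested `meta` struct and
# the deduplication-related columns declared in the schema.
example = next(iter(ds))
print(example["meta"]["identification"])                    # {'label': ..., 'prob': ...}
print(example["meta"]["warc_headers"]["warc-target-uri"])   # source URL from the WARC headers
print(example["perplexity_score"], example["dup_ratio"], example["has_dup_25"])
```
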
Use the 25% suffix array to deduplicate the full Oscar, i.e. remove any document that contains a span of at least 100 characters overlapping with the 25% chunk selected in the previous bullet. This is more permissive and leaves us with 136 million documents, or 31% of the original dataset. Also, for reasons whose explanation would probably involve terms like power laws, we still remove most of the most pervasive duplicates - so I'm pretty optimistic about this being useful.
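A minimal sketch of applying that filter downstream, assuming `has_dup_25` marks documents that share a span of at least 100 characters with the 25% chunk (an assumption about the flag's semantics, not confirmed by the card). The repo id is again a placeholder.

```python
from itertools import islice

from datasets import load_dataset

# Placeholder repo id; replace with this dataset's actual Hub path.
ds = load_dataset("user/oscar-dedup", split="train", streaming=True)

# Assumption: has_dup_25 == True means the document contains a >=100-char
# span that also occurs in the 25% chunk indexed by the suffix array,
# so keeping the False rows reproduces the "more permissive" dedup above.
deduped = ds.filter(lambda doc: not doc["has_dup_25"])

# Peek at a few surviving documents.
for doc in islice(deduped, 3):
    print(doc["id"], doc["text_length"], doc["dup_ratio"])
```

Streaming mode avoids downloading the full ~1.7 TB archive just to test the filter; the same `filter` call works on a fully downloaded split as well.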