---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: text
    dtype: string
  - name: meta
    struct:
    - name: warc_headers
      struct:
      - name: warc-record-id
        dtype: string
      - name: warc-date
        dtype: string
      - name: content-type
        dtype: string
      - name: content-length
        dtype: int32
      - name: warc-type
        dtype: string
      - name: warc-identified-content-language
        dtype: string
      - name: warc-refers-to
        dtype: string
      - name: warc-target-uri
        dtype: string
      - name: warc-block-digest
        dtype: string
    - name: identification
      struct:
      - name: label
        dtype: string
      - name: prob
        dtype: float32
    - name: annotations
      sequence: string
    - name: line_identifications
      list:
      - name: label
        dtype: string
      - name: prob
        dtype: float32
  - name: perplexity_score
    dtype: float64
  - name: text_length
    dtype: int64
  - name: url
    dtype: string
  - name: domain
    dtype: string
  - name: dup_ratio
    dtype: float64
  - name: pairs
    sequence:
      sequence: int64
  - name: repetitions
    sequence: binary
  - name: included_in_dedup
    dtype: bool
  - name: cluster
    sequence: int64
  - name: has_dup_25
    dtype: bool
  splits:
  - name: train
    num_bytes: 3188540880787
    num_examples: 431992659
  download_size: 1732364041898
  dataset_size: 3188540880787
---
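A minimal sketch of loading one record and inspecting the nested fields declared above, assuming access through the `datasets` library; the repository path below is a placeholder, not this dataset's actual name on the Hub:

```python
from datasets import load_dataset

# Placeholder repository path: substitute this dataset's actual name on the Hub.
ds = load_dataset("user/oscar-dedup", split="train", streaming=True)

example = next(iter(ds))

# Nested WARC metadata and language identification, as declared in the schema.
print(example["meta"]["warc_headers"]["warc-target-uri"])
print(example["meta"]["identification"])  # e.g. {"label": ..., "prob": ...}

# Top-level quality and deduplication signals.
print(example["perplexity_score"], example["dup_ratio"], example["has_dup_25"])
```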

We use the 25% suffix array to deduplicate the full Oscar corpus, i.e. we remove any document that shares a span of at least 100 characters with the 25% chunk selected earlier. This is more permissive and leaves 136 million documents, or 31% of the original dataset. For reasons whose full explanation would probably involve power laws, this still removes most of the most pervasive duplicates, so I'm fairly optimistic that the result is useful.
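As a rough illustration of that criterion, the sketch below keeps only the documents that would survive this pass. It assumes that `has_dup_25` flags documents sharing at least one 100-character span with the 25% chunk and that `pairs` stores character offsets of duplicated spans; both are readings of the schema above, not documented guarantees, and the repository path is again a placeholder.

```python
from datasets import load_dataset

# Placeholder repository path, same caveat as above.
ds = load_dataset("user/oscar-dedup", split="train", streaming=True)

MIN_SPAN = 100  # minimum overlap length used by the suffix-array deduplication


def keep(example):
    """Return True if the document has no >=100-char overlap with the 25% chunk."""
    # Assumed meaning of the flag: the document shares a long span with the 25% chunk.
    if example["has_dup_25"]:
        return False
    # Defensive check on the recorded spans, assuming each entry in `pairs`
    # is a (start, end) pair of character offsets of a duplicated region.
    for span in example.get("pairs") or []:
        if len(span) == 2 and span[1] - span[0] >= MIN_SPAN:
            return False
    return True


deduped = ds.filter(keep)  # lazily drops flagged documents while streaming
```

Filtering on the stored flags avoids re-running the suffix-array matching itself, which is the expensive part of the pipeline.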