Muennighoff committed
Commit 5f28721
1 parent: d91cbe0

Update README.md

Files changed (1): README.md (+1, -2)
README.md CHANGED

@@ -69,6 +69,5 @@ dataset_info:
   download_size: 1732364041898
   dataset_size: 3188540880787
   ---
-# Dataset Card for "oscar-dedup-expanded"
 
-[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+Use the 25% suffix array to deduplicate the full Oscar, i.e. remove any document containing a span of at least 100 characters that also appears in the 25% chunk we selected in the previous bullet. This is more permissive and leaves us with 136 million documents, or 31% of the original dataset. Also, likely because duplicate counts follow a power-law distribution, we still remove most of the most pervasive duplicates - so I'm pretty optimistic about this being useful.
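The overlap rule in the added line can be sketched in a few lines of Python. This is a toy illustration only, not the suffix-array-based implementation the commit refers to; the function names are made up, and the span length is shrunk from 100 to 10 characters so the example strings stay readable.

```python
# Toy sketch of the dedup rule: drop any document that shares a
# length-n character span with the reference chunk. A span of >= n
# characters overlaps iff at least one of its n-char windows does,
# so checking all n-char windows is equivalent (though memory-heavy;
# the real pipeline uses a suffix array instead).
SPAN = 10  # the actual rule uses 100-character spans

def spans(text, n=SPAN):
    """All length-n character windows of `text`."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def deduplicate(documents, reference_chunk, n=SPAN):
    """Keep only documents sharing no length-n span with the chunk."""
    reference_spans = spans(reference_chunk, n)
    return [doc for doc in documents
            if not (spans(doc, n) & reference_spans)]

docs = [
    "a completely original sentence",
    "this exact duplicated passage appears in the chunk",
]
chunk = "...this exact duplicated passage appears in the chunk..."
kept = deduplicate(docs, chunk)
# kept == ["a completely original sentence"]
```

The second document is removed because every one of its 10-character windows occurs verbatim in the chunk; the first shares no 10-character window with it and survives.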