RPV2 ccnet preprocessing

#29 opened by bpwl0121

Hi,
you applied ccnet and kept only the head and middle buckets from the Wikipedia quality classifier.
Do you use other rules to further filter the head and middle parts, such as line deduplication? Also, could you share the thresholds of those rules?

Best,


Hi @bpwl0121 ,

you applied ccnet and kept only the head and middle buckets from the Wikipedia quality classifier.

Yes, that's correct: the ccnet pipeline splits documents into three buckets (head, middle, and tail) based on their perplexity under a Wikipedia reference language model.
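To make the bucketing concrete, here is a minimal sketch of perplexity-based quality bucketing in the spirit of ccnet. The 33rd/66th percentile cutoffs are placeholder assumptions for illustration; ccnet derives its per-language cutoffs from the dump itself, and the exact thresholds are not part of this discussion.

```python
# Sketch: assign documents to head/middle/tail by perplexity under a reference LM.
# The percentile cutoffs (33/66) are illustrative assumptions, not the real values.
import numpy as np

def assign_buckets(perplexities):
    """Bucket each document by its perplexity: lower perplexity = closer to Wikipedia."""
    low, high = np.percentile(perplexities, [33, 66])
    buckets = []
    for ppl in perplexities:
        if ppl <= low:
            buckets.append("head")    # lowest-perplexity third
        elif ppl <= high:
            buckets.append("middle")
        else:
            buckets.append("tail")    # highest-perplexity third, dropped for RPV2 annotations
    return buckets

docs_ppl = [85.3, 412.7, 198.0, 1320.5, 57.9]
print(assign_buckets(docs_ppl))  # e.g. ['head', 'middle', 'middle', 'tail', 'head']
```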

Do you use other rules to further filter the head and middle parts, such as line deduplication? Also, could you share the thresholds of those rules?

After running ccnet and selecting the head + middle buckets for the quality annotations, we did not apply any further filtering. However, ccnet itself performs the following filtering steps (a rough sketch of both follows after the list):

  • discards any document with a language-identification score < 0.5
  • deduplicates paragraphs within shards. In our case, each dump is divided into 5k shards, and paragraphs are deduplicated within each shard.
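The sketch below illustrates both steps under simplifying assumptions: the document schema (a dict with `language_score` and `text` fields), the paragraph normalization, and the use of SHA-1 hashes are assumptions for illustration, not the exact ccnet implementation.

```python
# Hedged sketch of the two ccnet-style filtering steps described above.
import hashlib

LANG_SCORE_THRESHOLD = 0.5  # documents below this language-ID score are discarded

def keep_document(doc):
    """Language filter: keep a document only if its language score is >= 0.5."""
    return doc["language_score"] >= LANG_SCORE_THRESHOLD

def dedup_paragraphs(shard_docs):
    """Within a single shard, drop paragraphs whose normalized hash was already seen."""
    seen = set()
    for doc in shard_docs:
        kept = []
        for para in doc["text"].split("\n"):
            h = hashlib.sha1(para.strip().lower().encode("utf-8")).hexdigest()
            if h not in seen:
                seen.add(h)
                kept.append(para)
        doc["text"] = "\n".join(kept)
    return shard_docs

shard = [
    {"language_score": 0.92, "text": "Common boilerplate line\nUnique content A"},
    {"language_score": 0.41, "text": "Low-confidence language text"},
    {"language_score": 0.88, "text": "Common boilerplate line\nUnique content B"},
]
filtered = [doc for doc in shard if keep_document(doc)]
print(dedup_paragraphs(filtered))  # the repeated boilerplate line survives only once
```

Since deduplication is scoped to a shard, the memory needed for the hash set stays bounded, at the cost of not catching duplicates that land in different shards of the same dump.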
mauriceweber changed discussion status to closed
