## Dataset Summary

Input data for the second phase of BERT pretraining (sequence length 512). All text is tokenized with the bert-base-uncased tokenizer. The data is obtained by concatenating and shuffling the wikipedia (split: 20220301.en) and bookcorpusopen datasets and running the reference BERT data preprocessor without masking and without input duplication (dupe_factor = 1). Documents are split into sentences with the NLTK sentence tokenizer (nltk.tokenize.sent_tokenize).
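As a rough illustration of the document preparation described above (not the reference BERT preprocessor itself, and stopping before sequences are packed to length 512), the source corpora can be combined and sentence-split with the datasets, nltk, and transformers libraries; the shuffle seed below is an assumption, not part of this card:

```python
# Illustrative sketch only: approximates the corpus assembly and sentence
# splitting described above; the actual data was produced with the
# reference BERT data preprocessor.
import nltk
from datasets import load_dataset, concatenate_datasets
from nltk.tokenize import sent_tokenize
from transformers import AutoTokenizer

nltk.download("punkt")  # NLTK sentence tokenizer models

# Load the two source corpora, keep only the raw text column so they can
# be concatenated, then shuffle at the document level.
wiki = load_dataset("wikipedia", "20220301.en", split="train")
books = load_dataset("bookcorpusopen", split="train")
wiki = wiki.remove_columns([c for c in wiki.column_names if c != "text"])
books = books.remove_columns([c for c in books.column_names if c != "text"])
corpus = concatenate_datasets([wiki, books]).shuffle(seed=42)  # seed is an assumption

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def split_and_tokenize(example):
    # Split each document into sentences, then tokenize each sentence
    # without special tokens (the preprocessor adds those when packing
    # sentences into 512-token training sequences).
    sentences = sent_tokenize(example["text"])
    return {"token_ids": [tokenizer(s, add_special_tokens=False)["input_ids"]
                          for s in sentences]}

corpus = corpus.map(split_and_tokenize)
```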

See the dataset for the first phase of pretraining: bert_pretrain_phase1.