Sequence "packing" logic #2
by pietrolesci - opened
Hi there,
I am trying to understand how the tokenized Pile dataset was "packed" into sequences of fixed length (2049 tokens). I started wondering about this when I noticed (from a small sample) that the sequences are not concatenated using the <EOS> token, which I thought was done by default. A minimal sketch of what I mean is included below.
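For concreteness, here is a minimal sketch of the two packing behaviours I had in mind. The function, the document list, and the use of token id 0 as the `<EOS>` id are illustrative assumptions on my side, not the actual Pythia preprocessing code:

```python
from typing import Iterable, List

SEQ_LEN = 2049  # fixed sequence length mentioned above
EOS_ID = 0      # assuming 0 is the <EOS>/EOD id, as in the GPT-NeoX tokenizer

def pack(docs: Iterable[List[int]], insert_eos: bool) -> List[List[int]]:
    """Concatenate tokenized documents into one flat token stream and cut it
    into fixed-length chunks of SEQ_LEN tokens, dropping the final remainder."""
    stream: List[int] = []
    for doc in docs:
        stream.extend(doc)
        if insert_eos:
            stream.append(EOS_ID)  # the separator I expected between documents
    return [stream[i:i + SEQ_LEN] for i in range(0, len(stream) - SEQ_LEN + 1, SEQ_LEN)]

# With insert_eos=True every document boundary is marked by EOS_ID; with
# insert_eos=False documents simply run into each other, which is what the
# small sample I looked at seems to show.
```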
Thanks a lot for your help!
pietrolesci changed discussion title from Data preparation implementation to Sequence "packing" logic
Linking a related GitHub issue: https://github.com/EleutherAI/pythia/issues/123#issuecomment-1882232326
Hi, are there any updates on this? The dataset has not been updated since, and I can confirm that token id 0 (which corresponds to the EOD token) is not present in the data.
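For reference, a check like this can be reproduced on a small sample with something along the following lines, assuming the tokenized data can be streamed with `datasets` and exposes the token ids under an `input_ids` column; the dataset path and column name below are placeholders, not the actual repository layout:

```python
from datasets import load_dataset

# Placeholder dataset path; substitute the actual tokenized Pile repository.
ds = load_dataset("org/tokenized-pile", split="train", streaming=True)

found_eod = False
for i, example in enumerate(ds):
    if 0 in example["input_ids"]:  # 0 = EOD token id in the GPT-NeoX tokenizer
        found_eod = True
        break
    if i >= 1_000:  # only scan a small sample of sequences
        break

print("EOD token id 0 found in sample:", found_eod)
```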