Sequence "packing" logic

#2
by pietrolesci - opened

Hi there,

I am trying to understand how the tokenized Pile dataset was "packed" into sequences of fixed length (2049 tokens). I started wondering about this when I noticed (from a small sample) that the sequences are not concatenated using the <EOS> token, which is something I thought was done by default.
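For concreteness, this is the packing scheme I had assumed (a minimal sketch only; the EOS id of 0 and the chunking details are my assumptions, not necessarily what was actually done for this dataset):

```python
# Assumed packing scheme (sketch): concatenate the tokenized documents,
# separating them with the EOS/EOD token, then slice the resulting stream
# into fixed-length sequences of 2049 tokens.
import numpy as np

EOS_TOKEN_ID = 0   # assumption: <|endoftext|> maps to id 0 in the GPT-NeoX tokenizer
SEQ_LEN = 2049

def pack(tokenized_docs):
    """tokenized_docs: iterable of token-id lists, one per document."""
    stream = []
    for doc in tokenized_docs:
        stream.extend(doc)
        stream.append(EOS_TOKEN_ID)      # document boundary marker
    stream = np.asarray(stream, dtype=np.uint16)
    n_seqs = len(stream) // SEQ_LEN      # drop the incomplete tail
    return stream[: n_seqs * SEQ_LEN].reshape(n_seqs, SEQ_LEN)

# Toy example: 2000 tiny "documents" packed into shape (n_seqs, 2049)
packed = pack([[10, 11, 12], [13, 14]] * 1000)
print(packed.shape)  # (3, 2049)
```

Is this (or something like it, e.g. without the separator token) what was used here?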

Thanks a lot for your help!

pietrolesci changed discussion title from Data preparation implementation to Sequence "packing" logic

Hi, are there any updates on this? The dataset has not been updated since, and I can confirm that token id 0 (which corresponds to the EOD token) is not present in the data.
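For completeness, this is roughly the check behind that claim (a sketch; the repo id and the column name are placeholders, adjust them to whatever this dataset actually uses):

```python
# Sketch of the check: scan a sample of sequences for token id 0 (EOD).
# "EleutherAI/<this-dataset>" and the "tokens" column are placeholders.
from datasets import load_dataset

ds = load_dataset("EleutherAI/<this-dataset>", split="train", streaming=True)

found = False
for i, example in enumerate(ds):
    if 0 in example["tokens"]:   # 0 = assumed EOD / <|endoftext|> id
        found = True
        break
    if i >= 1_000:               # only inspect the first ~1k sequences
        break

print("token id 0 found in sample:", found)
```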
