chanind committed on
Commit 557f4b3
Parent: 1f93e68

Update README.md

Files changed (1): README.md (+19, -0)
README.md CHANGED
@@ -15,3 +15,22 @@ configs:
 - split: train
   path: data/train-*
 ---
+
+# OpenWebTextCorpus tokenized for Llama 3
+
+This dataset is a pre-tokenized version of the [Skylion007/openwebtext](https://huggingface.co/datasets/Skylion007/openwebtext) dataset
+using the [Llama 3](https://huggingface.co/meta-llama/Meta-Llama-3-8B) tokenizer. As such, this dataset follows the same licensing as the original openwebtext dataset.
+
+This pre-tokenization is done as a performance optimization for using the openwebtext dataset with a Llama 3 model.
+This dataset was created using [SAELens](https://github.com/jbloomAus/SAELens) with the following settings:
+
+- context_size: 8192
+- shuffled: true
+- begin_batch_token: "bos"
+- begin_sequence_token: null
+- sequence_separator_token: "eos"
+- sae_lens_version: "3.3.0"
+
+The `eos` token was used as a separator between sequences, since this resulted in the lowest loss experimentally.
+Ideally we would use the same tokenization settings as the original Llama 3 training regime, so if
+you have information that Llama 3 was trained with a different tokenization setup, please reach out!
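
For reference, a pre-tokenized dataset like this can be consumed directly with the `datasets` library. A minimal sketch, assuming a placeholder Hub repo id (not confirmed by this commit) and that the tokens live in an `input_ids` column, as is typical for SAELens pre-tokenized output:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hub id of this dataset.
ds = load_dataset("chanind/openwebtext-llama3", split="train", streaming=True)

# Each row should hold one fixed-length sequence of token ids.
# The column name "input_ids" is an assumption about the output schema.
row = next(iter(ds))
print(len(row["input_ids"]))  # expected: 8192, matching context_size above
```

Streaming avoids downloading the full tokenized corpus up front, which is usually what you want for a dataset of this size.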
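The settings listed in the diff map onto SAELens' pretokenization runner. A minimal sketch of the corresponding call, assuming the `PretokenizeRunner` / `PretokenizeRunnerConfig` API as documented for SAELens 3.x; the field names (e.g. `shuffle` for the `shuffled: true` setting) and `num_proc` value are assumptions and may differ between versions:

```python
from sae_lens import PretokenizeRunner, PretokenizeRunnerConfig

cfg = PretokenizeRunnerConfig(
    tokenizer_name="meta-llama/Meta-Llama-3-8B",
    dataset_path="Skylion007/openwebtext",
    context_size=8192,               # matches context_size above
    shuffle=True,                    # "shuffled: true"
    begin_batch_token="bos",         # BOS at the start of each tokenized batch
    begin_sequence_token=None,       # no token at the start of each document
    sequence_separator_token="eos",  # EOS between concatenated documents
    num_proc=4,                      # assumption: tune to available CPUs
    # hf_repo_id="your-username/openwebtext-llama3",  # to push to the Hub
)

PretokenizeRunner(cfg).run()
```

The `sequence_separator_token="eos"` choice reflects the experiment mentioned in the README: separating concatenated documents with EOS gave the lowest loss.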