Dataset preview: a single column, input_ids (int32), containing the concatenated token IDs.

This is the tokenized version of the Salesforce/wikitext dataset. All samples in the train split are concatenated into a single stream of token IDs for pretraining the LLM.
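
Below is a minimal sketch of how such a tokenized, concatenated dataset can be produced. The exact steps are in the preprocessing notebook linked below; the tokenizer (GPT-2 via Hugging Face transformers) and the wikitext config name used here are assumptions for illustration only.

```python
# Sketch: tokenize Salesforce/wikitext and concatenate the train split
# into one long stream of token IDs. Tokenizer and config name are
# assumptions; see the linked preprocessing notebook for the real steps.
import numpy as np
from datasets import load_dataset
from transformers import AutoTokenizer

raw = load_dataset("Salesforce/wikitext", "wikitext-103-raw-v1", split="train")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def tokenize(batch):
    return {"input_ids": tokenizer(batch["text"])["input_ids"]}

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

# Flatten every sample into a single contiguous int32 array of token IDs.
all_ids = np.concatenate(
    [np.asarray(ids, dtype=np.int32) for ids in tokenized["input_ids"] if ids]
)
print(all_ids.shape, all_ids.dtype)
```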

To see how the tokenized dataset was created, see: https://github.com/SSahas/Implementing-LLM-From-Scratch/blob/main/assets/preprocessing.ipynb

PROJECT

Implementing a decoder-only model (GPT-style) from scratch with PyTorch

Pretraining an LLM for text generation, using Salesforce/wikitext as the training corpus. The model was trained for 30,000 iterations with a batch size of 8 for ~2.5 hours on a Tesla P100 (Kaggle's free GPU tier). The final training loss is around 3.5. The Adam optimizer was used with a learning rate of 5e-4. After training, the model produces somewhat reasonable English; training for longer with a larger n_embd and block size should improve generation quality.
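
A hedged sketch of the pretraining loop described above: random windows of block_size tokens are sampled from the concatenated stream (the all_ids array from the earlier sketch) and fed to a decoder-only model. GPTModel, block_size, and n_embd are placeholders; only the batch size, optimizer, learning rate, and iteration count come from the description above. See the linked repository for the actual implementation.

```python
# Sketch of the pretraining loop on the concatenated token stream.
# GPTModel, block_size and n_embd are placeholders; batch size, Adam,
# lr=5e-4 and 30,000 iterations follow the description above.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
data = torch.tensor(all_ids, dtype=torch.long)  # concatenated token IDs
block_size = 256                                # assumed context length
batch_size = 8

def get_batch():
    # Sample random windows; targets are the inputs shifted by one token.
    ix = torch.randint(len(data) - block_size - 1, (batch_size,))
    x = torch.stack([data[i : i + block_size] for i in ix])
    y = torch.stack([data[i + 1 : i + 1 + block_size] for i in ix])
    return x.to(device), y.to(device)

model = GPTModel(vocab_size=tokenizer.vocab_size, n_embd=384,
                 block_size=block_size).to(device)  # hypothetical model class
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

for step in range(30_000):
    xb, yb = get_batch()
    logits = model(xb)                              # (B, T, vocab_size)
    loss = torch.nn.functional.cross_entropy(
        logits.view(-1, logits.size(-1)), yb.view(-1))
    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    optimizer.step()
```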
