Questions about whether Llama 3 base models have been trained on OpenWebText
#2 · by shijz
Dear author of this dataset,
Thank you for providing such a useful dataset; it saves us from tokenizing OpenWebText with the Llama 3 tokenizer ourselves.
Do you think the Llama 3 (and Llama 3.1) base models have been trained on the OpenWebText dataset? Can we use OpenWebText to evaluate their performance?
I don't think Meta has published details about Llama's training data. The models were likely trained on a private dataset controlled by Meta, and I have no information as to whether that dataset includes OpenWebText.
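That said, here is a minimal sketch of how one might measure perplexity on this dataset, in case it's useful. Two assumptions that are not confirmed anywhere in this thread: the examples store token ids under an `input_ids` field, and `user/openwebtext-llama3-tokenized` is a placeholder repo id you would replace with this dataset's actual id. `meta-llama/Meta-Llama-3-8B` is the gated base checkpoint on the Hub.

```python
# Minimal perplexity sketch -- not a definitive eval harness.
# Assumptions: token ids live under "input_ids", and the dataset repo id
# below is a placeholder for this dataset's actual Hub id.
import math

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",  # gated base checkpoint (not -Instruct)
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model.eval()

# Stream so a quick check doesn't download the whole dataset.
ds = load_dataset("user/openwebtext-llama3-tokenized", split="train", streaming=True)

nll_sum, token_count = 0.0, 0
for i, example in enumerate(ds):
    if i >= 100:  # small sample; increase for a more stable estimate
        break
    ids = torch.tensor(example["input_ids"][:2048], device=model.device).unsqueeze(0)
    with torch.no_grad():
        out = model(input_ids=ids, labels=ids)  # HF shifts labels internally
    n = ids.numel() - 1  # loss is the mean NLL over the n predicted tokens
    nll_sum += out.loss.item() * n
    token_count += n

print(f"perplexity over {token_count} tokens: {math.exp(nll_sum / token_count):.2f}")
```

One caveat: if the base model did see OpenWebText (or heavily overlapping web text) during pretraining, a low perplexity here reflects memorization as much as generalization, so treat the number with caution.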