hazyresearch/based-360m
These language model checkpoints are trained at the 360M and 1.3B parameter scales for up to 50B tokens on the Pile corpus, and are intended for research purposes.
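As a usage illustration, here is a minimal sketch for loading one of these checkpoints with the Hugging Face `transformers` auto classes. The repo id is taken from above; the `trust_remote_code=True` flag and the presence of tokenizer files in the repo are assumptions (custom architectures hosted on the Hub typically require the flag).

```python
# Minimal sketch: load a Based checkpoint from the Hub and generate text.
# Assumptions: the repo ships tokenizer files and custom modeling code
# that `trust_remote_code=True` enables; neither is confirmed by the card.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "hazyresearch/based-360m"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    trust_remote_code=True,
)

# Quick sanity check: generate a short continuation.
inputs = tokenizer("The Pile is a large", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```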