rugpt3small_based_on_gpt2
The model architecture design, pretraining, and evaluation are documented in our preprint: A Family of Pretrained Transformer Language Models for Russian.
The model was pretrained by the SberDevices team using the Transformers library with a sequence length of 1024, on 80B tokens for around 3 epochs. It was then finetuned with a context size of 2048.
Training took around one week in total on 32 GPUs.
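A minimal usage sketch with the Transformers library is shown below. It assumes the model is available on the Hugging Face Hub under the id `ai-forever/rugpt3small_based_on_gpt2`; the prompt and generation parameters are illustrative only.

```python
# Sketch: load the model and generate a continuation for a Russian prompt.
# Assumes the hub id "ai-forever/rugpt3small_based_on_gpt2"; adjust if needed.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_id = "ai-forever/rugpt3small_based_on_gpt2"  # assumed hub id
tokenizer = GPT2Tokenizer.from_pretrained(model_id)
model = GPT2LMHeadModel.from_pretrained(model_id)

prompt = "Александр Сергеевич Пушкин родился в "
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation; parameters here are example values.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```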
Authors
- NLP core team RnD Telegram channel:
- Dmitry Zmitrovich
Cite us
@misc{zmitrovich2023family,
      title={A Family of Pretrained Transformer Language Models for Russian},
      author={Dmitry Zmitrovich and Alexander Abramov and Andrey Kalmykov and Maria Tikhonova and Ekaterina Taktasheva and Danil Astafurov and Mark Baushenko and Artem Snegirev and Tatiana Shavrina and Sergey Markov and Vladislav Mikhailov and Alena Fenogenova},
      year={2023},
      eprint={2309.10931},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}