Model Description
This is the 8B Base (continually pretrained) version of our Uzbek-optimized Llama 3.1 8B model. For instruction-following capability, check out the instruction-tuned models in the alloma series.
Our 8B Base model has been continually pretrained with a context length of 4096 tokens on 3.6B tokens (67% English, 33% Uzbek). Our customized tokenizer averages about 1.7 tokens per Uzbek word versus ~3.5 in the original Llama models, i.e. roughly half as many tokens, which translates into about 2x faster inference and a longer effective context length on Uzbek text.
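A minimal usage sketch with the transformers library is shown below. The model id uzlm/alloma-8B-Base comes from this repository; the prompt and generation settings are illustrative, not prescriptive.

```python
# Minimal sketch: load the base model and generate a continuation.
# Assumes transformers and torch are installed and a GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uzlm/alloma-8B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# A base (non-instruct) model continues text rather than following instructions.
prompt = "O'zbekiston poytaxti"  # "The capital of Uzbekistan"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```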
Methodology: Efficient Vocabulary Adaptation for Uzbek
The primary motivation for our technical approach is to create a model with a more efficient tokenizer for the Uzbek language. Because fewer tokens are needed to represent the same amount of information, this yields both faster inference and a longer effective context length when processing Uzbek text.
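As an illustration, the tokens-per-word figures above can be reproduced by comparing the adapted tokenizer against the original Llama 3.1 tokenizer. The sample sentence and word-level averaging below are illustrative, and meta-llama/Llama-3.1-8B is a gated repository that requires accepted access on the Hub.

```python
# Sketch: compare tokens-per-word on Uzbek text between the adapted
# tokenizer and the original Llama 3.1 tokenizer (gated; requires access).
from transformers import AutoTokenizer

# Sample Uzbek sentence: "Tashkent is the capital and largest city of Uzbekistan."
text = "Toshkent O'zbekistonning poytaxti va eng yirik shahridir."

for model_id in ["uzlm/alloma-8B-Base", "meta-llama/Llama-3.1-8B"]:
    tok = AutoTokenizer.from_pretrained(model_id)
    n_tokens = len(tok(text, add_special_tokens=False)["input_ids"])
    n_words = len(text.split())
    print(f"{model_id}: {n_tokens} tokens, {n_tokens / n_words:.2f} tokens/word")
```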
Acknowledgements
This project was developed by the teams at Examy.me and Teamwork.uz. Their collaboration and resources were essential to the creation and success of the alloma model series.
Base model: meta-llama/Llama-3.1-8B