Pretrained on roughly 1.6B (mostly Turkish) tokens from Hugging Face datasets and "high quality" scraped data, using a single RTX 3090. Training is ongoing. The model can already be (sort of) fine-tuned for instruction following.
Sample generation settings (TR_4k_LoRA):

`max_length=256, top_k=20, min_p=0.1, repetition_penalty=1.1, temperature=0.1, seed=22366`
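For reference, a minimal sketch of applying these settings with the `transformers` library. The repo id `your-username/model-name` is a placeholder for the actual model repo, and `TR_4k_LoRA` is assumed to be a PEFT LoRA adapter; substitute the real identifiers:

```python
# Minimal generation sketch using the sample settings above.
# "your-username/model-name" is a placeholder repo id, not the real one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

set_seed(22366)  # seed from the sample settings

tokenizer = AutoTokenizer.from_pretrained("your-username/model-name")
model = AutoModelForCausalLM.from_pretrained("your-username/model-name")

# If TR_4k_LoRA is published as a PEFT adapter, it could be attached like:
# from peft import PeftModel
# model = PeftModel.from_pretrained(model, "your-username/TR_4k_LoRA")

inputs = tokenizer("Soru: ...", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=256,
    do_sample=True,          # sampling is required for top_k/min_p/temperature
    top_k=20,
    min_p=0.1,               # needs transformers >= 4.40
    repetition_penalty=1.1,
    temperature=0.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```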