---
language:
- xal
- en
- ru
license: mit
tags:
- gpt3
- transformers
- mgpt
---
# 🌸 Kalmyk mGPT 1.3B

A language model for Kalmyk. The model has 1.3B parameters, as its name suggests.

Kalmyk belongs to the Mongolic language family. It is a very rare language with approximately 200,000 speakers. Here are some facts about it:

1. It is spoken by the Kalmyk people in Russia, predominantly in the Republic of Kalmykia.
2. Kalmykia is the only region in Europe where Buddhism is the most practiced religion.
3. Kalmyk has been influenced by the Russian language.

## Technical details

It is one of the models derived from the base [mGPT-XL (1.3B)](https://huggingface.co/ai-forever/mGPT) model (see the list below), which was originally trained on 61 languages from 25 language families using the Wikipedia and C4 corpora.

We found additional data for 23 languages, most of which are considered low-resource, and decided to further tune the base model. **Kalmyk mGPT 1.3B** was trained for another 200 steps with a batch size of 4 and a context window of **2048** tokens on a single A100 GPU.

The final validation perplexity of this model is **13.97**.

_Chart of the training loss and perplexity:_

![](https://i.imgur.com/wkPdvXP.png)
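A minimal usage sketch with the `transformers` library is shown below. The repository id `ai-forever/mGPT-1.3B-kalmyk` is an assumption inferred from the naming pattern of the sibling models listed in the next section, and the sampling parameters are illustrative defaults, not values used in training.

```python
# Minimal sketch: continue a Kalmyk prompt with the fine-tuned mGPT model.
# NOTE: the repo id is assumed from the naming pattern of the other
# mGPT-1.3B-* models on this card and may differ from the actual one.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

MODEL_ID = "ai-forever/mGPT-1.3B-kalmyk"  # assumed repository id

def generate(prompt: str, max_new_tokens: int = 50) -> str:
    """Load the model and sample a continuation of the given prompt."""
    tokenizer = GPT2Tokenizer.from_pretrained(MODEL_ID)
    model = GPT2LMHeadModel.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,       # illustrative sampling settings
        top_p=0.95,
        temperature=0.8,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Хальмг келн"))  # "Kalmyk language" as a prompt
```

Loading the 1.3B-parameter checkpoint requires roughly 5 GB of memory in fp32; pass `torch_dtype=torch.float16` to `from_pretrained` to halve that on GPU.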

## Other mGPT-1.3B models

- [🇦🇲 mGPT-1.3B Armenian](https://huggingface.co/ai-forever/mGPT-1.3B-armenian)
- [🇦🇿 mGPT-1.3B Azerbaijan](https://huggingface.co/ai-forever/mGPT-1.3B-azerbaijan)
- [🏯 mGPT-1.3B Bashkir](https://huggingface.co/ai-forever/mGPT-1.3B-bashkir)
- [🇧🇾 mGPT-1.3B Belarusian](https://huggingface.co/ai-forever/mGPT-1.3B-belorussian)
- [🇧🇬 mGPT-1.3B Bulgarian](https://huggingface.co/ai-forever/mGPT-1.3B-bulgarian)
- [🌞 mGPT-1.3B Buryat](https://huggingface.co/ai-forever/mGPT-1.3B-buryat)
- [🌳 mGPT-1.3B Chuvash](https://huggingface.co/ai-forever/mGPT-1.3B-chuvash)
- [🇬🇪 mGPT-1.3B Georgian](https://huggingface.co/ai-forever/mGPT-1.3B-georgian)
- [🇰🇿 mGPT-1.3B Kazakh](https://huggingface.co/ai-forever/mGPT-1.3B-kazakh)
- [🇰🇬 mGPT-1.3B Kirgiz](https://huggingface.co/ai-forever/mGPT-1.3B-kirgiz)
- [🐻 mGPT-1.3B Mari](https://huggingface.co/ai-forever/mGPT-1.3B-mari)
- [🇲🇳 mGPT-1.3B Mongol](https://huggingface.co/ai-forever/mGPT-1.3B-mongol)
- [🐆 mGPT-1.3B Ossetian](https://huggingface.co/ai-forever/mGPT-1.3B-ossetian)
- [🇮🇷 mGPT-1.3B Persian](https://huggingface.co/ai-forever/mGPT-1.3B-persian)
- [🇷🇴 mGPT-1.3B Romanian](https://huggingface.co/ai-forever/mGPT-1.3B-romanian)
- [🇹🇯 mGPT-1.3B Tajik](https://huggingface.co/ai-forever/mGPT-1.3B-tajik)
- [☕ mGPT-1.3B Tatar](https://huggingface.co/ai-forever/mGPT-1.3B-tatar)
- [🇹🇲 mGPT-1.3B Turkmen](https://huggingface.co/ai-forever/mGPT-1.3B-turkmen)
- [🐎 mGPT-1.3B Tuvan](https://huggingface.co/ai-forever/mGPT-1.3B-tuvan)
- [🇺🇦 mGPT-1.3B Ukrainian](https://huggingface.co/ai-forever/mGPT-1.3B-ukranian)
- [🇺🇿 mGPT-1.3B Uzbek](https://huggingface.co/ai-forever/mGPT-1.3B-uzbek)
- [💎 mGPT-1.3B Yakut](https://huggingface.co/ai-forever/mGPT-1.3B-yakut)

## Feedback

If you find a bug or have additional data to train the model on in your language, please give us feedback.

The model will be improved over time. Stay tuned!