ai-forever committed · Commit 85856b0 · Parent(s): 6238e57
Update README.md
README.md CHANGED

@@ -17,7 +17,7 @@ Model was pretrained on a 600Gb of texts, mostly from MC4 and Wikipedia. Trainin
 
 Here is the table with number of tokens for each language in the pretraining corpus on a logarithmic scale:
 
-
+![](https://i.imgur.com/KSMfVX1.png)
 
 ## Languages
 