RichardErkhov committed on
Commit 65d3432
1 Parent(s): 056cb22

uploaded readme

Files changed (1):
  1. README.md +93 -0

README.md ADDED
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


kanarya-2b - bnb 4bits
- Model creator: https://huggingface.co/asafaya/
- Original model: https://huggingface.co/asafaya/kanarya-2b/

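The weights in this repository are stored in the `bitsandbytes` (bnb) 4-bit format. As a rough sketch of what that corresponds to, the snippet below loads the original `asafaya/kanarya-2b` checkpoint in 4-bit via the standard `transformers` + `bitsandbytes` integration; a CUDA GPU and the `bitsandbytes` and `accelerate` packages are assumed, and the exact quantization settings used for this upload are not documented here.

```python
# Sketch: load kanarya-2b with bitsandbytes 4-bit quantization from the original weights.
# Assumes a CUDA GPU plus the `bitsandbytes` and `accelerate` packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # keep weights in 4-bit blocks
    bnb_4bit_compute_dtype=torch.float16,  # run the matmuls in fp16
)

tokenizer = AutoTokenizer.from_pretrained("asafaya/kanarya-2b")
model = AutoModelForCausalLM.from_pretrained(
    "asafaya/kanarya-2b",
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "Benim adım Zeynep, ve en sevdiğim kitabın adı:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The pre-quantized weights stored here can be loaded the same way by pointing `from_pretrained` at this repository instead; the quantization configuration saved with the checkpoint should then be picked up automatically.
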
Original model description:
---
license: apache-2.0
datasets:
- oscar
- mc4
language:
- tr
pipeline_tag: text-generation
widget:
- text: "Benim adım Zeynep, ve en sevdiğim kitabın adı:"
  example_title: "Benim adım Zeynep, ve en sevdiğim kitabın adı"
- text: "Bugünkü yemeğimiz"
  example_title: "Bugünkü yemeğimiz"
---

# Kanarya-2B: Turkish Language Model

<img src="https://asafaya.me/images/kanarya.webp" alt="Kanarya Logo" style="width:600px;"/>

**Kanarya** is a pre-trained Turkish GPT-J 2B model. Released as part of the [Turkish Data Depository](https://tdd.ai/) effort, the Kanarya family comes in two sizes: Kanarya-2B (the larger model) and Kanarya-0.7B (the smaller one). Both models are trained on a large-scale Turkish text corpus filtered from the OSCAR and mC4 datasets. The training data is collected from a variety of sources, including news, articles, and websites, to create a diverse and high-quality dataset. The models are trained using a JAX/Flax implementation of the [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax) architecture.

## Model Details

- Model Name: Kanarya-2B
- Model Size: 2,050M parameters
- Training Data: OSCAR, mC4
- Language: Turkish
- Layers: 24
- Hidden Size: 2560
- Number of Heads: 20
- Context Size: 2048
- Positional Embeddings: Rotary
- Vocabulary Size: 32,768

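Since the card states a GPT-J architecture, these numbers map onto a standard `transformers` GPT-J configuration roughly as sketched below. This is for illustration only: the rotary dimension is an assumption, and the checkpoint's own `config.json` remains the authoritative source.

```python
# Illustration only: the Model Details above expressed as a GPT-J config.
# rotary_dim is an assumption; consult the checkpoint's config.json for the real values.
from transformers import GPTJConfig

config = GPTJConfig(
    vocab_size=32_768,   # Vocabulary Size
    n_positions=2048,    # Context Size
    n_embd=2560,         # Hidden Size
    n_layer=24,          # Layers
    n_head=20,           # Number of Heads
    rotary_dim=64,       # rotary positional embeddings (dimension assumed)
)

# Instantiating GPTJForCausalLM(config) would give roughly 2.05B parameters,
# consistent with the "2,050M parameters" figure above.
print(config)
```
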
## Intended Use

This model is only pre-trained on Turkish text data and is meant to be fine-tuned before use. After fine-tuning, it can serve a wide range of Turkish NLP tasks, including text generation, translation, and summarization. It is not intended to be used for any downstream task without fine-tuning.

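As a minimal sketch of that fine-tuning step, the skeleton below runs plain causal-language-model fine-tuning with the `transformers` Trainer. The dataset path, sequence length, and hyperparameters are placeholders, and a real run on a 2B-parameter model would typically add memory-saving measures (gradient checkpointing, parameter-efficient methods) on top of this.

```python
# Minimal causal-LM fine-tuning skeleton; dataset path and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "asafaya/kanarya-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # ensure a pad token for batching
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder corpus: any dataset with a "text" column is handled the same way.
dataset = load_dataset("text", data_files={"train": "my_turkish_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM, no masking

args = TrainingArguments(
    output_dir="kanarya-2b-finetuned",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=1e-5,
    bf16=True,  # assumes an Ampere-or-newer GPU; use fp16=True otherwise
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```
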
## Limitations and Ethical Considerations

Although the model is trained on a diverse, high-quality Turkish corpus, it may still generate toxic, biased, or otherwise unethical content. Use the model responsibly, make sure its output is appropriate for your use case, and report any issues you encounter.

## License: Apache 2.0

The model is licensed under the Apache 2.0 License and is free to use for any purpose, including commercial use. We encourage users to contribute to the model and report any issues. However, the model is provided "as is", without warranty of any kind.

## Citation

If you use the model, please cite the following paper:

```bibtex
@inproceedings{safaya-etal-2022-mukayese,
    title = "Mukayese: {T}urkish {NLP} Strikes Back",
    author = "Safaya, Ali and
      Kurtulu{\c{s}}, Emirhan and
      Goktogan, Arda and
      Yuret, Deniz",
    editor = "Muresan, Smaranda and
      Nakov, Preslav and
      Villavicencio, Aline",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-acl.69",
    doi = "10.18653/v1/2022.findings-acl.69",
    pages = "846--863",
}
```

## Acknowledgments

During this work, Ali Safaya was supported by a [KUIS AI Center](https://ai.ku.edu.tr/) fellowship. The pre-training of these models was performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center ([TRUBA](https://www.truba.gov.tr/index.php/en/main-page/) resources).