Kanarya-750M: Turkish Language Model

Kanarya Logo

Kanarya-750M is a pre-trained Turkish GPT-J model with roughly 750M parameters. Released as part of the Turkish Data Depository efforts, the Kanarya family comes in two sizes: Kanarya-2B (the larger model) and Kanarya-0.7B (this model). Both are trained on a large-scale Turkish text corpus filtered from the OSCAR and mC4 datasets. The training data is collected from a variety of sources, including news, articles, and websites, to form a diverse and high-quality corpus. The models are trained using a JAX/Flax implementation of the GPT-J architecture. They are pre-trained only and are intended to be fine-tuned for a wide range of Turkish NLP tasks.

Model Details

  • Model Name: Kanarya-750M
  • Model Size: 750M parameters
  • Training Data: OSCAR, mC4
  • Language: Turkish
  • Layers: 12
  • Hidden Size: 2048
  • Number of Heads: 16
  • Context Size: 2048
  • Positional Embeddings: Rotary
  • Vocabulary Size: 32,768
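
For reference, below is a minimal sketch of how these hyperparameters map onto a GPT-J configuration in Hugging Face transformers. This is illustrative only: the released checkpoint ships with its own config.json, so you normally do not need to build the configuration by hand.

```python
# Illustrative mapping of the model details above to a transformers GPTJConfig.
# Any field not listed in the model card (e.g. rotary_dim) is left at the
# library default and is an assumption, not a statement about the checkpoint.
from transformers import GPTJConfig

config = GPTJConfig(
    vocab_size=32_768,  # Vocabulary Size
    n_positions=2048,   # Context Size
    n_embd=2048,        # Hidden Size
    n_layer=12,         # Layers
    n_head=16,          # Number of Heads
)
print(config)
```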

Intended Use

This model is pre-trained on Turkish text only and is intended to be fine-tuned for a wide range of Turkish NLP tasks, including text generation, translation, and summarization. It is not intended to be used for downstream tasks without fine-tuning. A minimal usage sketch is shown below.
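
The sketch below shows how the raw pre-trained model can be prompted with Hugging Face transformers. It assumes the checkpoint is hosted on the Hub as asafaya/kanarya-750m and loads with the standard causal-LM auto classes; the prompt and sampling parameters are illustrative, and outputs from the un-fine-tuned model are for experimentation only.

```python
# Minimal text-generation sketch (assumed repo id: asafaya/kanarya-750m).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "asafaya/kanarya-750m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Türkiye'nin başkenti"  # Turkish: "The capital of Turkey"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,   # length of the continuation
    do_sample=True,      # sample instead of greedy decoding
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```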

Limitations and Ethical Considerations

The model is trained on a diverse, high-quality Turkish text corpus, but it may still generate toxic, biased, or otherwise unethical content. Users are strongly encouraged to use the model responsibly, to verify that generated content is appropriate for their use case, and to report any issues.

License: Apache 2.0

The model is licensed under the Apache 2.0 License. It is free to use for any purpose, including commercial use. We encourage users to contribute to the model and report any issues. However, the model is provided "as is" without warranty of any kind.

Citation

If you use the model, please cite the following paper:

@inproceedings{safaya-etal-2022-mukayese,
    title = "Mukayese: {T}urkish {NLP} Strikes Back",
    author = "Safaya, Ali  and
      Kurtulu{\c{s}}, Emirhan  and
      Goktogan, Arda  and
      Yuret, Deniz",
    editor = "Muresan, Smaranda  and
      Nakov, Preslav  and
      Villavicencio, Aline",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-acl.69",
    doi = "10.18653/v1/2022.findings-acl.69",
    pages = "846--863",
}

Acknowledgments

During this work, Ali Safaya was supported by a KUIS AI Center fellowship. The pre-training of these models was performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center (TRUBA resources).

