---
license: cc-by-nc-4.0
language:
  - kk
  - en
  - ru
  - tr
library_name: transformers
extra_gated_prompt: >-
  By accessing this model, you are agreeing to the Llama 3.1 terms and the
  CC BY-NC 4.0 license for non-commercial use.
extra_gated_fields:
  Company: text
  Country: country
  I want to use this model for:
    type: select
    options:
      - Research
      - Education
      - label: Other
        value: other
  I agree to use this model for non-commercial use ONLY: checkbox
---

## Model Overview


Made in Kazakhstan - Қазақстанда жасалған

## Description

LLama-3.1-KazLLM-1.0-70B is a large language model customized by ISSAI to improve the helpfulness of LLM generated responses in the Kazakh language.

## Terms of Use

By accessing this model, you are agreeing to the Llama 3.1 terms and conditions.

Additionally, this model is licensed under the CC BY-NC 4.0 license.

## Evaluation Metrics

Model evaluations were conducted using established benchmarks, employing a systematic process to test performance across various cognitive and technical tasks.

To benchmark your own model and learn about the evaluation conditions for the results below, refer to the IS2AI/KazLLM_Benchmark repo.
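Judging from the reported numbers, the Average column in the leaderboards below appears to be the arithmetic mean of the six per-benchmark scores. A minimal sketch, under that assumption:

```python
# Sketch: reproduce the leaderboard "Average" column as the arithmetic mean
# of the six per-benchmark scores (an assumption inferred from the tables).
def leaderboard_average(scores):
    """Mean of the per-benchmark scores, rounded to two decimals."""
    return round(sum(scores) / len(scores), 2)

# GPT-4o English scores: MMLU, Winogrande, Hellaswag, ARC, GSM8k, DROP
gpt4o_en = [83.2, 72.04, 100, 94.7, 93.03, 71]
print(leaderboard_average(gpt4o_en))  # → 85.66, matching the table
```

The same computation reproduces the other rows to within rounding.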

### English Leaderboard

| Model | Type | Average | MMLU_en | Winogrande_en | Hellaswag_en | ARC_en | GSM8k_en | DROP_en |
|---|---|---|---|---|---|---|---|---|
| GPT-4o | Closed | 85.66 | 83.2 | 72.04 | 100 | 94.7 | 93.03 | 71 |
| Llama-3.1-70b-instruct | Open-source | 85.59 | 76.58 | 81.1 | 88.46 | 95.77 | 90.3 | 81.32 |
| ISSAI KazLLM-1.0-70B | Open-source | 81.6 | 67.49 | 82.51 | 92.49 | 91.98 | 81.65 | 73.45 |
| ISSAI KazLLM-1.0-8B | Open-source | 76.4 | 64.71 | 73.97 | 84.1 | 90.78 | 71.95 | 72.91 |
| Llama-3.1-8b-instruct | Open-source | 73.4 | 65.67 | 67.86 | 74.06 | 89.23 | 73.99 | 69.57 |



### Kazakh Leaderboard

| Model | Type | Average | MMLU_kk | Winogrande_kk | Hellaswag_kk | ARC_kk | GSM8k_kk | DROP_kk |
|---|---|---|---|---|---|---|---|---|
| GPT-4o | Closed | 75.95 | 71.2 | 62.76 | 83.26 | 90.67 | 85.82 | 62 |
| ISSAI KazLLM-1.0-70B | Open-source | 74.26 | 64.26 | 73.57 | 81.52 | 88.58 | 76.35 | 61.27 |
| Llama-3.1-70b-instruct | Open-source | 64.19 | 60.95 | 60.84 | 50.93 | 82.78 | 78.47 | 51.18 |
| ISSAI KazLLM-1.0-8B | Open-source | 56.85 | 37.39 | 63.61 | 57.64 | 73.51 | 57.01 | 51.94 |
| Llama-3.1-8b-instruct | Open-source | 44.84 | 41.08 | 50.37 | 33.24 | 57.44 | 48.98 | 37.93 |



### Russian Leaderboard

| Model | Type | Average | MMLU_ru | Winogrande_ru | Hellaswag_ru | ARC_ru | GSM8k_ru | DROP_ru |
|---|---|---|---|---|---|---|---|---|
| ISSAI KazLLM-1.0-70B | Open-source | 72.99 | 39.86 | 75.72 | 86.67 | 95.41 | 78.47 | 61.79 |
| GPT-4o | Closed | 72.83 | 40.45 | 65.14 | 86.76 | 93.29 | 86.35 | 65 |
| Llama-3.1-70b-instruct | Open-source | 69.97 | 38.69 | 63.67 | 73.86 | 92.98 | 87.49 | 63.13 |
| ISSAI KazLLM-1.0-8B | Open-source | 61.4 | 32.98 | 60.22 | 69.35 | 85.6 | 66.26 | 53.98 |
| Llama-3.1-8b-instruct | Open-source | 55.64 | 33.23 | 47.14 | 52.13 | 82.13 | 69.07 | 50.15 |


## Usage

You can use the model with the Hugging Face Transformers library on 2 or more 80 GB GPUs (NVIDIA Ampere or newer), with at least 150 GB of free disk space to accommodate the download.

This code has been tested on Transformers v4.45.1, torch v2.3.1, and 2 H100 GPUs, but any setup that supports meta-llama/Llama-3.1-70B-Instruct should support this model as well. If you run into problems, consider upgrading Transformers first:

```shell
pip install -U transformers
```

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="issai/LLama-3.1-KazLLM-1.0-70B")
pipe(messages)
```

```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("issai/LLama-3.1-KazLLM-1.0-70B")
model = AutoModelForCausalLM.from_pretrained("issai/LLama-3.1-KazLLM-1.0-70B")
```
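Since the model is built on Llama 3.1, it presumably inherits the Llama 3.1 chat prompt format. In practice `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` handles this for you; purely for illustration, a sketch of the assumed prompt layout (the helper name here is hypothetical):

```python
# Illustrative sketch of the Llama 3.1-style chat layout this model is
# assumed to inherit; prefer tokenizer.apply_chat_template in real code.
def build_llama31_prompt(messages):
    """Render a list of {role, content} messages into a Llama 3.1-style prompt."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Cue the model to respond as the assistant.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

# Example prompt in Kazakh: "What is the capital of Kazakhstan?"
messages = [{"role": "user", "content": "Қазақстанның астанасы қандай?"}]
print(build_llama31_prompt(messages))
```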

## Input

- **Input Type(s):** Text
- **Input Format:** String
- **Input Parameters:** One-Dimensional (1D)
- **Other Properties Related to Input:** Maximum of 128k tokens

## Output

- **Output Type(s):** Text
- **Output Format:** String
- **Output Parameters:** One-Dimensional (1D)
- **Other Properties Related to Output:** Maximum of 4k tokens
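Given the limits above, a hypothetical helper for validating prompt length and capping generation length might look like the following (the constant values read "128k" as 128,000 and "4k" as 4,000, which is an assumption; the exact limits may differ):

```python
# Hypothetical guard for the stated limits: at most 128k input tokens and
# at most 4k generated tokens. Names and exact constants are illustrative.
MAX_INPUT_TOKENS = 128_000
MAX_OUTPUT_TOKENS = 4_000

def allowed_new_tokens(prompt_tokens, requested):
    """Reject over-long prompts and cap generation at the output limit."""
    if prompt_tokens > MAX_INPUT_TOKENS:
        raise ValueError(
            f"prompt of {prompt_tokens} tokens exceeds the "
            f"{MAX_INPUT_TOKENS}-token input limit"
        )
    return min(requested, MAX_OUTPUT_TOKENS)

print(allowed_new_tokens(1_000, 8_000))  # → 4000: request capped to the limit
```

The returned value can be passed as `max_new_tokens` when calling the pipeline.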

## Model Version

v1.0

## Ethical and Legal Considerations

The models provided in this repository, including ISSAI KAZ-LLM, are powerful tools designed to advance research and innovation. However, it is essential to use these models responsibly, ethically, and in accordance with applicable laws and regulations.

### Key Guidelines for Responsible Use

  1. Bias and Fairness: While the models are designed to reflect linguistic and cultural diversity, they may still exhibit biases. Please ensure that the outputs are evaluated critically and not used to perpetuate harmful stereotypes or unfair practices.

  2. Content Generation: Generated content should not be used to produce harmful, misleading, or deceptive information. Users should take extra care in ensuring the authenticity and reliability of the output in all contexts.

  3. Privacy and Data Protection: Ensure that any personal data input into the models complies with privacy laws and regulations. Do not use the models to generate or process sensitive personal information unless proper safeguards are in place.

  4. Ethical Considerations: The models should not be used to create content that promotes violence, hatred, discrimination, or illegal activities. Always adhere to ethical standards and foster positive impact through AI technologies.

  5. Accountability: The responsibility for the use of the models lies with the users. We encourage you to evaluate the generated content critically and consider the potential social, cultural, and ethical consequences of its use.

By accessing or using these models, you agree to follow these guidelines and contribute to the responsible development and application of AI technologies.

For any questions or concerns, please contact us at issai@nu.edu.kz.