|
--- |
|
license: cc-by-nc-4.0 |
|
language: |
|
- kk |
|
- en |
|
- ru |
|
- tr |
|
library_name: transformers |
|
|
|
extra_gated_prompt: "By accessing this model, you agree to the Llama 3.1 terms and the CC BY-NC license for non-commercial use."
|
extra_gated_fields: |
|
Company: text |
|
Country: country |
|
I want to use this model for: |
|
type: select |
|
options: |
|
- Research |
|
- Education |
|
- label: Other |
|
value: other |
|
I agree to use this model for non-commercial use ONLY: checkbox |
|
--- |
|
|
|
# Model Overview |
|
|
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/67056b2e6409e548690b1b6f/jYgsTXK3B3OTrZNBN6z2a.png) |
|
|
|
<!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/67056b2e6409e548690b1b6f/JUPiPqstRai87z1rLuI67.png) --> |
|
Made in Kazakhstan - Қазақстанда жасалған |
|
|
|
## Description: |
|
|
|
LLama-3.1-KazLLM-1.0-70B is a large language model customized by ISSAI to improve the helpfulness of LLM-generated responses in the Kazakh language.
|
|
|
|
|
|
|
## Terms of use |
|
|
|
By accessing this model, you agree to the Llama 3.1 terms and conditions listed below:
|
|
|
- [License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE) |
|
- [Acceptable Use Policy](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/USE_POLICY.md) |
|
- [Meta’s Privacy Policy](https://www.facebook.com/privacy/policy/) |
|
|
|
Additionally, this model is licensed under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license. |
|
|
|
## Evaluation Metrics |
|
|
|
Model evaluations were conducted using established benchmarks, employing a systematic process to test performance across various cognitive and technical tasks. |
|
|
|
To benchmark your own model and learn about the evaluation conditions for the results below, refer to the [IS2AI/KazLLM_Benchmark Repo](https://github.com/IS2AI/KazLLM_Benchmark).
|
|
|
### English Leaderboard |
|
|
|
| Model | Type | Average | MMLU_en | Winogrande_en | Hellaswag_en | ARC_en | GSM8k_en | DROP_en | |
|
|----------------------------|--------------|---------|---------|---------------|--------------|--------|----------|---------| |
|
| GPT-4o | Closed | 85.66 | 83.2 | 72.04 | 100 | 94.7 | 93.03 | 71 | |
|
| Llama-3.1-70b-instruct | Open-source | 85.59 | 76.58 | 81.1 | 88.46 | 95.77 | 90.3 | 81.32 | |
|
| **ISSAI KazLLM-1.0-70B** | Open-source | 81.6 | 67.49 | 82.51 | 92.49 | 91.98 | 81.65 | 73.45 | |
|
| ISSAI KazLLM-1.0-8B | Open-source | 76.4 | 64.71 | 73.97 | 84.1 | 90.78 | 71.95 | 72.91 | |
|
| Llama-3.1-8b-instruct | Open-source | 73.4 | 65.67 | 67.86 | 74.06 | 89.23 | 73.99 | 69.57 | |
|
|
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/67056b2e6409e548690b1b6f/q037sSa3Fljr1yXwJvnMQ.png) |
|
|
|
--- |
|
|
|
### Kazakh Leaderboard |
|
|
|
| Model | Type | Average | MMLU_kk | Winogrande_kk | Hellaswag_kk | ARC_kk | GSM8k_kk | DROP_kk | |
|
|----------------------------|--------------|---------|---------|---------------|--------------|--------|----------|---------| |
|
| GPT-4o | Closed | 75.95 | 71.2 | 62.76 | 83.26 | 90.67 | 85.82 | 62 | |
|
| **ISSAI KazLLM-1.0-70B** | Open-source | 74.26 | 64.26 | 73.57 | 81.52 | 88.58 | 76.35 | 61.27 | |
|
| Llama-3.1-70b-instruct | Open-source | 64.19 | 60.95 | 60.84 | 50.93 | 82.78 | 78.47 | 51.18 | |
|
| ISSAI KazLLM-1.0-8B | Open-source | 56.85 | 37.39 | 63.61 | 57.64 | 73.51 | 57.01 | 51.94 | |
|
| Llama-3.1-8b-instruct | Open-source | 44.84 | 41.08 | 50.37 | 33.24 | 57.44 | 48.98 | 37.93 | |
|
|
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/67056b2e6409e548690b1b6f/3X0DqoPAcP5jUGoAqHSg3.png) |
|
|
|
--- |
|
|
|
### Russian Leaderboard |
|
|
|
| Model | Type | Average | MMLU_ru | Winogrande_ru | Hellaswag_ru | ARC_ru | GSM8k_ru | DROP_ru | |
|
|----------------------------|--------------|---------|---------|---------------|--------------|--------|----------|---------| |
|
| **ISSAI KazLLM-1.0-70B** | Open-source | 72.99 | 39.86 | 75.72 | 86.67 | 95.41 | 78.47 | 61.79 | |
|
| GPT-4o | Closed | 72.83 | 40.45 | 65.14 | 86.76 | 93.29 | 86.35 | 65 | |
|
| Llama-3.1-70b-instruct | Open-source | 69.97 | 38.69 | 63.67 | 73.86 | 92.98 | 87.49 | 63.13 | |
|
| ISSAI KazLLM-1.0-8B | Open-source | 61.4 | 32.98 | 60.22 | 69.35 | 85.6 | 66.26 | 53.98 | |
|
| Llama-3.1-8b-instruct | Open-source | 55.64 | 33.23 | 47.14 | 52.13 | 82.13 | 69.07 | 50.15 | |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/67056b2e6409e548690b1b6f/quEZccDuxyoxKU-7ojU7Q.png) |
|
|
|
<!-- <table> |
|
<tr> |
|
<td><strong>Benchmark</strong></td> |
|
<td style="text-align: center"><strong>Llama-3.1-8B</strong></td> |
|
<td style="text-align: center"><strong>issai/LLama-3.1-KazLLM-1.0-70B</strong></td> |
|
</tr> |
|
<tr> |
|
<td>ARC-C (25-shot)</td> |
|
<td style="text-align: center">58.2</td> |
|
<td style="text-align: center">59.4</td> |
|
</tr> |
|
<tr> |
|
<td>MMLU (5-shot)</td> |
|
<td style="text-align: center">65.4</td> |
|
<td style="text-align: center">60.6</td> |
|
</tr> |
|
<tr> |
|
<td>HellaSwag (10-shot)</td> |
|
<td style="text-align: center">82.3</td> |
|
<td style="text-align: center">79.8</td> |
|
</tr> |
|
<tr> |
|
<td>WinoGrande (5-shot)</td> |
|
<td style="text-align: center">78.3</td> |
|
<td style="text-align: center">75.9</td> |
|
</tr> |
|
<tr> |
|
<td>GSM8K (5-shot)</td> |
|
<td style="text-align: center">50.7</td> |
|
<td style="text-align: center">56.3</td> |
|
</tr> |
|
<tr> |
|
<td>TruthfulQA (0-shot)</td> |
|
<td style="text-align: center">44.2</td> |
|
<td style="text-align: center">40.9</td> |
|
</tr> |
|
<tr> |
|
<td><strong>Average Score</strong></td> |
|
<td style="text-align: center"><strong>63.19</strong></td> |
|
<td style="text-align: center"><strong>62.16</strong></td> |
|
</tr> |
|
<tr> |
|
<td><strong>Accuracy Recovery (%)</strong></td> |
|
<td style="text-align: center"><strong>100</strong></td> |
|
<td style="text-align: center"><strong>98.37</strong></td> |
|
</tr> |
|
</table> --> |
|
|
|
## Usage: |
|
|
|
You can use this model with the Hugging Face Transformers library on 2 or more 80 GB GPUs (NVIDIA Ampere or newer), with at least 150 GB of free disk space to accommodate the download.
|
|
|
This code has been tested with Transformers v4.45.1 and torch v2.3.1 on 2 H100 GPUs, but any setup that supports meta-llama/Llama-3.1-70B-Instruct should also support this model. If you run into problems, consider upgrading Transformers:

```bash
pip install -U transformers
```
|
|
|
```python
# Use a pipeline as a high-level helper
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="issai/LLama-3.1-KazLLM-1.0-70B",
    torch_dtype=torch.bfloat16,  # halves memory relative to fp32
    device_map="auto",           # shard the 70B model across available GPUs
)

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
|
|
|
```python
# Load model directly
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("issai/LLama-3.1-KazLLM-1.0-70B")
model = AutoModelForCausalLM.from_pretrained(
    "issai/LLama-3.1-KazLLM-1.0-70B",
    torch_dtype=torch.bfloat16,  # halves memory relative to fp32
    device_map="auto",           # shard the 70B model across available GPUs
)
```
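When loading the model directly, prompts should follow the model's chat template (via `tokenizer.apply_chat_template`). As a rough illustration, the sketch below reproduces the documented Llama 3.1 prompt format by hand, assuming this model inherits the base tokenizer's template; in practice, always prefer `apply_chat_template` over manual formatting.

```python
# Sketch of the Llama 3.1 chat prompt format (assumption: this model keeps
# the base Llama 3.1 template). Use tokenizer.apply_chat_template in practice.
def format_llama31_chat(messages):
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Cue the model to reply as the assistant
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama31_chat(
    [{"role": "user", "content": "Қазақстанның астанасы қандай?"}]
)
print(prompt)
```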
|
|
|
## Input: |
|
**Input Type(s):** Text <br> |
|
**Input Format:** String <br> |
|
**Input Parameters:** One Dimensional (1D) <br> |
|
**Other Properties Related to Input:** Max of 128k tokens<br> |
|
|
|
## Output: |
|
**Output Type(s):** Text <br> |
|
**Output Format:** String <br> |
|
**Output Parameters:** One Dimensional (1D) <br> |
|
**Other Properties Related to Output:** Max of 4k tokens <br> |
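Given the 128k-token input window and the 4k-token output cap above, one simple way to budget generation length is to cap `max_new_tokens` by whatever room the prompt leaves in the context window. The helper below is purely illustrative, not part of the model's API:

```python
# Illustrative helper: keep prompt + output within the 128k context window
# while respecting the 4k output cap stated above.
CONTEXT_WINDOW = 131072  # 128k tokens
OUTPUT_CAP = 4096        # 4k tokens

def max_new_tokens_for(prompt_len: int) -> int:
    remaining = CONTEXT_WINDOW - prompt_len
    return max(0, min(OUTPUT_CAP, remaining))

print(max_new_tokens_for(1000))    # short prompt: full 4096-token budget
print(max_new_tokens_for(130000))  # near the window: only 1072 tokens left
```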
|
|
|
|
|
## Model Version: |
|
v1.0 |
|
|
|
|
|
|
|
## Ethical and Legal Considerations: |
|
The models provided in this repository, including ISSAI KazLLM, are powerful tools designed to advance research and innovation. However, it is essential to use these models responsibly, ethically, and in accordance with applicable laws and regulations.
|
|
|
### Key Guidelines for Responsible Use: |
|
|
|
1. **Bias and Fairness:** |
|
While the models are designed to reflect linguistic and cultural diversity, they may still exhibit biases. Please ensure that the outputs are evaluated critically and not used to perpetuate harmful stereotypes or unfair practices. |
|
|
|
2. **Content Generation:** |
|
Generated content should not be used to produce harmful, misleading, or deceptive information. Users should take extra care in ensuring the authenticity and reliability of the output in all contexts. |
|
|
|
3. **Privacy and Data Protection:** |
|
Ensure that any personal data input into the models complies with privacy laws and regulations. Do not use the models to generate or process sensitive personal information unless proper safeguards are in place. |
|
|
|
4. **Ethical Considerations:** |
|
The models should not be used to create content that promotes violence, hatred, discrimination, or illegal activities. Always adhere to ethical standards and foster positive impact through AI technologies. |
|
|
|
5. **Accountability:** |
|
The responsibility for the use of the models lies with the users. We encourage you to evaluate the generated content critically and consider the potential social, cultural, and ethical consequences of its use. |
|
|
|
By accessing or using these models, you agree to follow these guidelines and contribute to the responsible development and application of AI technologies. |
|
|
|
For any questions or concerns, please contact us at **issai@nu.edu.kz**. |
|
|
|
<!-- ## Citation |
|
|
|
If you find this model useful, please cite the following works |
|
|
|
```bibtex |
|
|
|
``` --> |