---
language:
- en
- fa
---
|
|
|
<p align="center">
  <picture>
    <img alt="Hugging Face Transformers Library" src="https://i.postimg.cc/VN4F7WRC/Untitled-design-modified.png" width="1000" height="450" style="max-width: 100%;">
  </picture>
  <br/>
  <br/>
</p>
|
|
|
<h4 align="center">
    <p>
        <b>English</b> |
        <a href="https://huggingface.co/aidal/Persian-Mistral-7B#model-description-1">简体中文</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">हिन्दी</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">Русский</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">Português</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_te.md">తెలుగు</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Français</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiếng Việt</a>
    </p>
</h4>
|
## [Model description](#model-description) | [Example output](#example-output) | [Benchmark results](#benchmark-results) | [How to use](#how-to-use) | [Training and finetuning](#training-and-finetuning)
|
|
|
----

# Model description

----

# Example output
|
|
|
**Example 1:**

- Input: "سلام، خوبی؟" ("Hi, how are you?")

- Output: "سلام، خوشحالم که با شما صحبت می کنم. چطور می توانم به شما کمک کنم؟" ("Hello, I'm glad to be talking with you. How can I help you?")
|
|
|
|
---- |
|
# Benchmark results
|
|
|
| model         | dataset      | max_token | prompt | score   |
|---------------|--------------|-----------|--------|---------|
| base-model-7b | ARC-easy-dev | 2         | en-1   | 0.41929 |
| base-model-7b | ARC-easy-dev | 80        | en-2   | 0.39122 |
| base-model-7b | ARC-easy-dev | 300       | en-1   | 0.34448 |

| model       | dataset      | max_token | prompt | score   |
|-------------|--------------|-----------|--------|---------|
| fa-model-7b | ARC-easy-dev | 80        | en-1   | 0.37894 |
| fa-model-7b | ARC-easy-dev | 80        | en-2   | 0.33333 |
| fa-model-7b | ARC-easy-dev | 80        | fa-2   | 0.28771 |
| fa-model-7b | ARC-easy-dev | 300       | fa-1   | 0.25752 |
| fa-model-7b | ARC-easy-dev | 2         | fa-1   | 0.24035 |
|
|
|
<br>
<br>
<br>
|
|
|
| model         | dataset           | max_token | prompt | score   |
|---------------|-------------------|-----------|--------|---------|
| base-model-7b | ARC-challenge-dev | 80        | en-2   | 0.37123 |
| base-model-7b | ARC-challenge-dev | 2         | en-2   | 0.36789 |
| base-model-7b | ARC-challenge-dev | 2         | en-1   | 0.35451 |
| base-model-7b | ARC-challenge-dev | 80        | en-1   | 0.33779 |

| model       | dataset           | max_token | prompt | score   |
|-------------|-------------------|-----------|--------|---------|
| fa-model-7b | ARC-challenge-dev | 2         | en-1   | 0.39298 |
| fa-model-7b | ARC-challenge-dev | 80        | en-1   | 0.38421 |
| fa-model-7b | ARC-challenge-dev | 2         | en-2   | 0.31929 |
| fa-model-7b | ARC-challenge-dev | 80        | en-2   | 0.31754 |
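
The scores above read as plain multiple-choice accuracy on the ARC dev sets. As a rough illustration only (the helper below is an assumption for clarity, not the evaluation harness actually used here), accuracy compares the model's predicted answer key against the gold key for each item:

```python
# Hypothetical sketch: ARC-style multiple-choice scoring as plain accuracy
# (fraction of items where the predicted answer key matches the gold key).

def accuracy(predictions, gold_keys):
    """predictions and gold_keys are equal-length lists of answer keys, e.g. 'A'-'D'."""
    if len(predictions) != len(gold_keys):
        raise ValueError("predictions and gold_keys must have the same length")
    correct = sum(p == g for p, g in zip(predictions, gold_keys))
    return correct / len(gold_keys)

# Example: 2 of 5 hypothetical items answered correctly -> 0.4
print(accuracy(["A", "C", "B", "D", "A"], ["A", "B", "B", "A", "C"]))
```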
|
|
|
----

# How to use
|
|
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("aidal/Persian-Mistral-7B")
model = AutoModelForCausalLM.from_pretrained("aidal/Persian-Mistral-7B")

# "What is the capital of Iran?"
input_text = "پایتخت ایران کجاست؟"
inputs = tokenizer(input_text, return_tensors="pt")

# Generate a continuation and decode it back to text
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
----

# Training and finetuning