---
language:
- en
- fa
---
<p align="center">
<picture>
<img alt="Hugging Face Transformers Library" src="https://i.postimg.cc/VN4F7WRC/Untitled-design-modified.png" width="1000" height="450" style="max-width: 100%;">
</picture>
<br/>
<br/>
</p>
<h4 align="center">
<p>
<b>English</b> |
<a href="https://huggingface.co/aidal/Persian-Mistral-7B#model-description-1">简体中文</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">हिन्दी</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">Русский</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">Рortuguês</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_te.md">తెలుగు</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Français</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiếng Việt</a> |
</p>
</h4>
## [Model description](#model-description) | [Example output](#example-output) | [Benchmark results](#benchmark-results) | [How to use](#how-to-use) | [Training and finetuning](#training-and-finetuning)
----
# Model description

Persian-Mistral-7B is a Mistral-7B-based causal language model adapted for Persian (Farsi) while retaining its English capability.
----
# Example output

**Example 1:**

- Input: "سلام، خوبی؟" ("Hi, how are you?")
- Output: "سلام، خوشحالم که با شما صحبت می کنم. چطور می توانم به شما کمک کنم؟" ("Hello, I'm glad to be talking with you. How can I help you?")
----
# Benchmark results
**base-model-7b on ARC-easy-dev:**

| model | dataset | max_token | prompt | score |
|---------------|--------------|-----------|--------|---------|
| base-model-7b | ARC-easy-dev | 2 | en-1 | 0.41929 |
| base-model-7b | ARC-easy-dev | 80 | en-2 | 0.39122 |
| base-model-7b | ARC-easy-dev | 300 | en-1 | 0.34448 |

**fa-model-7b on ARC-easy-dev:**

| model | dataset | max_token | prompt | score |
|-------------|--------------|-----------|--------|---------|
| fa-model-7b | ARC-easy-dev | 80 | en-1 | 0.37894 |
| fa-model-7b | ARC-easy-dev | 80 | en-2 | 0.33333 |
| fa-model-7b | ARC-easy-dev | 80 | fa-2 | 0.28771 |
| fa-model-7b | ARC-easy-dev | 300 | fa-1 | 0.25752 |
| fa-model-7b | ARC-easy-dev | 2 | fa-1 | 0.24035 |

**base-model-7b on ARC-challenge-dev:**

| model | dataset | max_token | prompt | score |
|---------------|-------------------|-----------|--------|---------|
| base-model-7b | ARC-challenge-dev | 80 | en-2 | 0.37123 |
| base-model-7b | ARC-challenge-dev | 2 | en-2 | 0.36789 |
| base-model-7b | ARC-challenge-dev | 2 | en-1 | 0.35451 |
| base-model-7b | ARC-challenge-dev | 80 | en-1 | 0.33779 |

**fa-model-7b on ARC-challenge-dev:**

| model | dataset | max_token | prompt | score |
|-------------|-------------------|-----------|--------|---------|
| fa-model-7b | ARC-challenge-dev | 2 | en-1 | 0.39298 |
| fa-model-7b | ARC-challenge-dev | 80 | en-1 | 0.38421 |
| fa-model-7b | ARC-challenge-dev | 2 | en-2 | 0.31929 |
| fa-model-7b | ARC-challenge-dev | 80 | en-2 | 0.31754 |
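The exact evaluation script behind these scores is not published here. The following is only a minimal sketch of how an ARC-style multiple-choice evaluation could be run with this model, where `max_new_tokens` plays the role of the `max_token` column; the prompt template, the answer-extraction rule, and the `allenai/ai2_arc` dataset name are assumptions, not the setup (`en-1`, `fa-1`, ...) that produced the numbers above.

```python
# Hypothetical sketch of an ARC-style multiple-choice evaluation.
# The prompt format and scoring rule here are illustrative assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("aidal/Persian-Mistral-7B")
model = AutoModelForCausalLM.from_pretrained("aidal/Persian-Mistral-7B")

dataset = load_dataset("allenai/ai2_arc", "ARC-Easy", split="validation")

correct = 0
for example in dataset:
    # Render the answer options as "A. ...", "B. ...", and so on.
    choices = "\n".join(
        f"{label}. {text}"
        for label, text in zip(example["choices"]["label"], example["choices"]["text"])
    )
    prompt = f"Question: {example['question']}\n{choices}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    # max_new_tokens corresponds to the max_token column in the tables above.
    outputs = model.generate(**inputs, max_new_tokens=2)
    # Decode only the newly generated tokens, not the prompt.
    answer = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    if example["answerKey"] in answer:
        correct += 1

print(f"score: {correct / len(dataset):.5f}")
```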
----
# How to use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Download the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("aidal/Persian-Mistral-7B")
model = AutoModelForCausalLM.from_pretrained("aidal/Persian-Mistral-7B")

input_text = "پایتخت ایران کجاست؟"  # "What is the capital of Iran?"
input_ids = tokenizer(input_text, return_tensors="pt")

# Generate a completion; max_new_tokens bounds the length of the answer.
outputs = model.generate(**input_ids, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
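On a GPU, the 7B model can be loaded in half precision to roughly halve memory use. This variant is a sketch assuming a CUDA device is available; `device_map="auto"` additionally requires the `accelerate` package.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Half-precision loading; assumes a CUDA GPU and `pip install accelerate`.
tokenizer = AutoTokenizer.from_pretrained("aidal/Persian-Mistral-7B")
model = AutoModelForCausalLM.from_pretrained(
    "aidal/Persian-Mistral-7B",
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("پایتخت ایران کجاست؟", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```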
----
# Training and finetuning
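The training recipe for this model is not documented in this card. As a placeholder, here is a minimal sketch of one common way to finetune a 7B causal LM like this one, using LoRA adapters via the `peft` library; the dataset file, hyperparameters, and target modules are illustrative assumptions, not the procedure actually used to train Persian-Mistral-7B.

```python
# Hypothetical LoRA finetuning sketch (not the recipe used for this model).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "aidal/Persian-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Attach small trainable LoRA adapters to the attention projections,
# leaving the base weights frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Illustrative corpus: any plain-text file works with the "text" loader.
dataset = load_dataset("text", data_files="persian_corpus.txt", split="train")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="persian-mistral-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
    train_dataset=dataset,
    # mlm=False makes the collator produce causal-LM labels from input_ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

LoRA is chosen here only because it keeps the memory footprint of finetuning a 7B model manageable on a single GPU; full finetuning follows the same `Trainer` pattern without the `peft` wrapping step.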