Muhammadreza committed 81906ca (parent: 6509aee)

Update README.md

README.md CHANGED
@@ -28,8 +28,26 @@ Maral is the Persian name of [Red Deer](https://en.wikipedia.org/wiki/Red_deer),
### Prompt Format

This model requires the _Guanaco_ prompt format, which looks like this:

```
### Human: <prompt>
### Assistant: <answer>
```

So in your code, you may write prompts like this:

```python
prompt = "در سال ۱۹۹۶ چه کسی رییس جمهور آمریکا بود؟"  # "Who was the president of the USA in 1996?"
prompt = f"### Human: {prompt}\n### Assistant:"
```

More information is provided in the inference sections below.
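The wrapping shown above, and the matching step of pulling the model's reply back out of its generated text, can be sketched as a pair of small helpers. These names are illustrative, not part of the model repo, and the unwrapping logic assumes the model echoes the prompt before its answer:

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the Guanaco format the model expects."""
    return f"### Human: {user_message}\n### Assistant:"


def extract_answer(generated_text: str) -> str:
    """Pull the assistant's reply out of the full generated text."""
    # Everything after the last "### Assistant:" marker is the answer;
    # if the model keeps generating a follow-up "### Human:" turn, cut it off.
    answer = generated_text.rsplit("### Assistant:", 1)[-1]
    return answer.split("### Human:", 1)[0].strip()
```

With `build_prompt("سلام")` you get `"### Human: سلام\n### Assistant:"`, ready to pass to the tokenizer.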
### 4 bit Quantization

If you want to use 4 bit quantization, we have a PEFT adapter for you [here](https://huggingface.co/MaralGPT/MaralGPT-Mistral-7B-v-0-1). You can also find _Google Colab_ notebooks [here](https://github.com/prp-e/maralgpt).
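A minimal sketch of loading the adapter in 4 bit with the Hugging Face stack (`transformers`, `peft`, `bitsandbytes` installed, CUDA GPU available). The base-model id below is an assumption inferred from the adapter's name, and the function name is illustrative, not from the repo:

```python
# Repo ids: the adapter is the PEFT repo linked above; the base model
# is an assumption (a Mistral 7B base implied by the adapter's name).
BASE_MODEL = "mistralai/Mistral-7B-v0.1"
ADAPTER = "MaralGPT/MaralGPT-Mistral-7B-v-0-1"


def load_maral_4bit():
    """Load the base model in 4 bit and attach the MaralGPT PEFT adapter."""
    import torch
    from peft import PeftModel
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              BitsAndBytesConfig)

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    model = AutoModelForCausalLM.from_pretrained(
        BASE_MODEL,
        quantization_config=bnb_config,
        device_map="auto",
    )
    model = PeftModel.from_pretrained(model, ADAPTER)
    return model, tokenizer
```

The linked Colab notebooks are the authoritative reference for the exact setup; this sketch only shows the general 4 bit loading pattern.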
### Inference on a big GPU

### Inference on a small GPU (Consumer Hardware/Free Colab)