Update README.md
README.md CHANGED
@@ -39,7 +39,13 @@ This model was fine-tuned for instruction following. Instruction-tuned models ar
 
 ### How to use
 
-If you want to use this model for instruction-following, you need to use the same prompt format we used in the fine-tuning process (basically the same format what Meta used in their Llama2 models)
+If you want to use this model for instruction-following, you need to use the same prompt format we used in the fine-tuning process (basically the same format that Meta used in their Llama2 models).\
+**Note: do not use "LlamaTokenizer" from the transformers library; always use AutoTokenizer instead, or use the plain sentencepiece tokenizer.**
+
+Looking for <b>GGUF versions?</b>
+For now, those can be found here: [GGUF-versions](https://huggingface.co/mradermacher/Ahma-3B-Instruct-GGUF)
+
+Here is an example using the instruction-following prompt format with the tokenizer's built-in chat template feature, which also makes it easy to format multi-turn chats, along with some generation arguments you can adjust for your use case:
 
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
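The added paragraph says the prompt format is basically the one Meta used for Llama2. As a rough illustration only, here is a minimal sketch of a Llama2-style prompt builder; the `<s>`/`[INST]`/`<<SYS>>` tags follow Meta's Llama2 convention and are an assumption here, not this repo's verified template. The tokenizer's built-in chat template (via `apply_chat_template`) is the authoritative source.

```python
# Illustrative sketch only: a Llama2-style instruction prompt builder.
# The model's real template ships with the tokenizer (apply_chat_template);
# the tags below follow Meta's Llama2 convention and are an assumption.

def format_llama2_style(messages, system_prompt=None):
    """Render [{"role": ..., "content": ...}] chat turns as one prompt string."""
    prompt = "<s>[INST] "
    if system_prompt:
        prompt += f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    for i, message in enumerate(messages):
        if message["role"] == "user":
            if i > 0:
                prompt += "<s>[INST] "  # each new user turn opens a fresh block
            prompt += message["content"] + " [/INST]"
        elif message["role"] == "assistant":
            prompt += " " + message["content"] + " </s>"
    return prompt

print(format_llama2_style([{"role": "user", "content": "Hello"}]))
# -> <s>[INST] Hello [/INST]
```

In practice you would not build this string by hand: `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` produces the canonical prompt for the model, which is exactly why the README steers users to AutoTokenizer.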