A quantized GGML version for use with llama.cpp, kobold.cpp and other GUIs for CPU inference can be found [here](https://huggingface.co/jphme/vicuna-13b-v1.3-ger-GGML).
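With a downloaded GGML file, a llama.cpp invocation looks roughly like this. This is a sketch, not part of the model release: the model file name is illustrative (pick the quantization that fits your RAM), and `-m`, `-p`, and `-n` select the model file, prompt, and number of tokens to generate.

```shell
# Download one of the quantized GGML files from the linked repository first.
# The file name below is an assumption; substitute the quantization you chose.
./main \
  -m ./models/vicuna-13b-v1.3-ger.ggmlv3.q4_0.bin \
  -p "USER: Hallo! ASSISTANT:" \
  -n 256
```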
## Prompt Template

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.

USER: Hello!
ASSISTANT: Hello!</s>
USER: How are you?
ASSISTANT: I am good.</s>
```

## Results

I only evaluated the output on a small, handcrafted set of German test prompts, confirming that the model's ability to understand and generate German text exceeds that of the base model in many situations.