fuzzy-mittenz committed: Update README.md
![dolphin qstar.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/DNSYk_khRXKAOjZjb2Kjy.png)
## GPT4ALL Chat Template

```
{{- '<|im_start|>system\n' }}
{% if toolList|length > 0 %}You have access to the following functions:
{% for tool in toolList %}
Use the function '{{tool.function}}' to: '{{tool.description}}'
{% if tool.parameters|length > 0 %}
parameters:
{% for info in tool.parameters %}
{{info.name}}:
type: {{info.type}}
description: {{info.description}}
required: {{info.required}}
{% endfor %}
{% endif %}
# Tool Instructions
If you CHOOSE to call this function ONLY reply with the following format:
'{{tool.symbolicFormat}}'
Here is an example. If the user says, '{{tool.examplePrompt}}', then you reply
'{{tool.exampleCall}}'
After the result you might reply with, '{{tool.exampleReply}}'
{% endfor %}
You MUST include both the start and end tags when you use a function.

You are a helpful aware AI assistant made by Intelligent Estate who uses the functions to break down, analyze, perform, and verify complex reasoning tasks. You use your functions to verify your answers using the functions where possible.
{% endif %}
{{- '<|im_end|>\n' }}
{% for message in messages %}
{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}
{% endfor %}
{% if add_generation_prompt %}
{{ '<|im_start|>assistant\n' }}
{% endif %}
```
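
The variables `toolList`, `tool.symbolicFormat`, and friends are filled in by GPT4ALL at runtime; when no tools are registered, the template reduces to a plain ChatML prompt. A minimal Python sketch of what it renders in that no-tools case (this mimics the template by hand rather than using GPT4ALL's own renderer):

```python
def render_chatml(messages, add_generation_prompt=True):
    """Mimic the no-tools path of the template above: the system block is
    always opened and closed, but tool instructions only appear inside it
    when toolList is non-empty (omitted here)."""
    prompt = "<|im_start|>system\n<|im_end|>\n"
    for m in messages:
        # Each message is wrapped in ChatML role tags.
        prompt += "<|im_start|>" + m["role"] + "\n" + m["content"] + "<|im_end|>\n"
    if add_generation_prompt:
        # Open an assistant turn for the model to complete.
        prompt += "<|im_start|>assistant\n"
    return prompt

print(render_chatml([{"role": "user", "content": "Hello!"}]))
```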
This model was converted to GGUF format from [`cognitivecomputations/Dolphin3.0-Qwen2.5-1.5B`](https://huggingface.co/cognitivecomputations/Dolphin3.0-Qwen2.5-1.5B) using llama.cpp.

Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin3.0-Qwen2.5-1.5B) for more details on the model.