TheBloke committed on
Commit 9dc89e5
1 Parent(s): 142e28c

Upload README.md

Files changed (1): README.md (+14, -0)
README.md CHANGED
@@ -9,6 +9,19 @@ model_creator: Riiid
  model_name: Sheep Duck Llama 2
  model_type: llama
  pipeline_tag: text-generation
+ prompt_template: '### System:
+
+ {system_message}
+
+
+ ### User:
+
+ {prompt}
+
+
+ ### Assistant:
+
+ '
  quantized_by: TheBloke
  tags:
  - Riiid
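
For reference, the `prompt_template` added above renders to the Orca-style format: a `### System:` block, a `### User:` block, then `### Assistant:` (in a single-quoted YAML scalar, two blank lines fold to one blank line in the rendered template). A minimal Python sketch of filling it in; `build_prompt` is a hypothetical helper, not part of this repository:

```python
# Minimal sketch: render the Orca-style prompt_template added in this commit.
# build_prompt is a hypothetical helper, not part of the repository.

PROMPT_TEMPLATE = (
    "### System:\n"
    "{system_message}\n"
    "\n"
    "### User:\n"
    "{prompt}\n"
    "\n"
    "### Assistant:\n"
)

def build_prompt(prompt: str,
                 system_message: str = "You are a helpful assistant.") -> str:
    """Substitute the system and user messages into the template."""
    return PROMPT_TEMPLATE.format(system_message=system_message, prompt=prompt)

if __name__ == "__main__":
    print(build_prompt("Write a haiku about ducks."))
```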
 
@@ -64,6 +77,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
  <!-- repositories-available start -->
  ## Repositories available

+ * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Sheep-Duck-Llama-2-70B-AWQ)
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Sheep-Duck-Llama-2-70B-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Sheep-Duck-Llama-2-70B-GGUF)
  * [Riiid's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Riiid/sheep-duck-llama-2)
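
The GGUF files listed above can be run with clients such as llama-cpp-python. A minimal sketch, assuming the package is installed and a Q4_K_M file has been downloaded locally (the exact filename and the context/offload settings are assumptions, not something this commit specifies):

```python
# Minimal sketch: run one of the GGUF quantisations with llama-cpp-python.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="sheep-duck-llama-2-70b.Q4_K_M.gguf",  # assumed local file
    n_ctx=4096,       # context window
    n_gpu_layers=40,  # layers to offload to GPU; 0 for CPU-only inference
)

# Prompt built in the Orca style declared by the prompt_template above.
prompt = (
    "### System:\n"
    "You are a helpful assistant.\n\n"
    "### User:\n"
    "Name three trade-offs of quantising a 70B model.\n\n"
    "### Assistant:\n"
)

out = llm(prompt, max_tokens=256, stop=["### User:"])
print(out["choices"][0]["text"])
```

The 2- to 8-bit GGUF options trade accuracy for memory, so the quantisation you pick (and hence the filename) depends on your hardware budget.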