TheBloke committed daf4c8f (parent: 74b5597)

Upload README.md

Files changed (1): README.md +10 -0
README.md CHANGED
@@ -5,6 +5,15 @@ license: other
  model_creator: Zaraki Quem Parte
  model_name: Kuchiki 1.1 L2 7B
  model_type: llama
+ prompt_template: '### Instruction:
+
+
+ {prompt}
+
+
+ ### Response:
+
+ '
  quantized_by: TheBloke
  tags:
  - llama2
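The `prompt_template` added in this hunk is the Alpaca-style instruction format that clients are expected to wrap user input in before sending it to the model. A minimal sketch of applying it with plain Python string formatting; the `build_prompt` helper is hypothetical, not part of the README:

```python
# Alpaca-style template, matching the prompt_template added above.
PROMPT_TEMPLATE = """### Instruction:

{prompt}

### Response:
"""

def build_prompt(user_prompt: str) -> str:
    # Hypothetical helper: substitute the user's request into the template.
    return PROMPT_TEMPLATE.format(prompt=user_prompt)

print(build_prompt("Summarise the plot of Hamlet in two sentences."))
```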
@@ -58,6 +67,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
  <!-- repositories-available start -->
  ## Repositories available

+ * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Kuchiki-1.1-L2-7B-AWQ)
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Kuchiki-1.1-L2-7B-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Kuchiki-1.1-L2-7B-GGUF)
  * [Zaraki Quem Parte's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/zarakiquemparte/kuchiki-1.1-l2-7b)
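The GGUF files linked above can be run with any of the clients the README lists as supporting GGUF. As one hedged example, a sketch using llama-cpp-python; the exact `.gguf` filename and the layer-offload count are assumptions that depend on which quant you download and on your hardware:

```python
# Sketch: running one of the GGUF quants with llama-cpp-python
# (pip install llama-cpp-python). Filename is an assumption; use
# whichever quant size you actually downloaded from the GGUF repo.
from llama_cpp import Llama

llm = Llama(
    model_path="kuchiki-1.1-l2-7b.Q4_K_M.gguf",  # assumed local file
    n_ctx=4096,        # Llama 2 context length
    n_gpu_layers=35,   # offload layers to GPU; set 0 for CPU-only
)

# Wrap the request in the Alpaca-style template from the metadata above.
prompt = "### Instruction:\n\nWrite a haiku about autumn.\n\n### Response:\n"
output = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```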