TheBloke committed
Commit 799488f
1 Parent(s): ed66643

Upload README.md

Files changed (1): README.md +10 -0
README.md CHANGED
@@ -5,6 +5,15 @@ license: other
 model_creator: Zaraki Quem Parte
 model_name: Kuchiki 1.1 L2 7B
 model_type: llama
+prompt_template: '### Instruction:
+
+
+  {prompt}
+
+
+  ### Response:
+
+  '
 quantized_by: TheBloke
 tags:
 - llama2
@@ -42,6 +51,7 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 <!-- repositories-available start -->
 ## Repositories available
 
+* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Kuchiki-1.1-L2-7B-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Kuchiki-1.1-L2-7B-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Kuchiki-1.1-L2-7B-GGUF)
 * [Zaraki Quem Parte's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/zarakiquemparte/kuchiki-1.1-l2-7b)
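
For reference, the `prompt_template` YAML added in this commit folds (per YAML single-quoted scalar rules) to `### Instruction:\n\n{prompt}\n\n### Response:\n`. Below is a minimal sketch of applying that template with `transformers`, assuming the GPTQ repository linked above and an environment with GPTQ support (e.g. optimum + auto-gptq) installed; the example instruction is illustrative only.

```python
# Minimal sketch: fill the prompt_template from this commit and run the
# GPTQ-quantised model linked above. Assumes transformers with GPTQ
# support (e.g. optimum + auto-gptq) is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Folded form of the YAML prompt_template added in this commit.
PROMPT_TEMPLATE = "### Instruction:\n\n{prompt}\n\n### Response:\n"

model_id = "TheBloke/Kuchiki-1.1-L2-7B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Example instruction (illustrative only).
text = PROMPT_TEMPLATE.format(prompt="Write a haiku about autumn rain.")
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same template applies to the AWQ and GGUF builds listed above; only the model-loading code differs.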