TheBloke committed on
Commit
1fcd92a
1 Parent(s): 3a333bf

Update README.md

Files changed (1)
  1. README.md +5 -2
README.md CHANGED
@@ -42,10 +42,13 @@ These models were quantised using hardware kindly provided by [Latitude.sh](http

```
<|user|>
- prompt goes here
+ {prompt}
<|assistant|>
+
```

+ Note: it is important to add a line break (`\n`) after the `<|assistant|>` token in the prompt template.
+
## Provided files

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

@@ -130,7 +133,7 @@ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,

prompt = "Tell me about AI"
prompt_template=f'''<|user|>
- prompt goes here
+ {prompt}
<|assistant|>
'''
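For readers applying this change, a minimal, self-contained sketch (plain Python, no model or library calls assumed) of the corrected template follows; it only illustrates the `{prompt}` placeholder and the trailing newline after `<|assistant|>` that the new note calls out:

```python
# Sketch of the corrected prompt template introduced by this commit.
# The trailing newline after <|assistant|> is deliberate: the note added in
# this change says a line break must follow the <|assistant|> token.
prompt = "Tell me about AI"
prompt_template = f'''<|user|>
{prompt}
<|assistant|>
'''

# The formatted prompt now ends with "<|assistant|>\n" rather than "<|assistant|>".
assert prompt_template.endswith("<|assistant|>\n")
print(prompt_template)
```

The earlier wording of the template omitted both the `{prompt}` placeholder and this final newline, which is what the two hunks above correct.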