dtype: float32
</PRE>

<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>

In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern":

Set the "Smoothing_factor" to 1.5 to 2.5.

: in KoboldCpp -> Settings -> Samplers -> Advanced -> "Smooth_F"

: in text-generation-webui -> parameters -> lower right.

: in Silly Tavern this is called "Smoothing".
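If you drive KoboldCpp headless rather than through its UI, recent builds also expose this setting via the local generate API. The sketch below builds such a request body; the endpoint path and the "smoothing_factor" field name are assumptions based on current KoboldCpp builds, so check the API of your version.

```python
import json

# Sketch: setting the smoothing factor through KoboldCpp's local HTTP API
# instead of Settings -> Samplers -> Advanced -> "Smooth_F" in the UI.
# Field names and the endpoint are assumptions; verify against your build.

def build_generate_payload(prompt: str, smoothing_factor: float = 1.8) -> dict:
    """Build a generate request body with quadratic smoothing enabled."""
    if not 1.5 <= smoothing_factor <= 2.5:
        # The range suggested above for this model
        raise ValueError("use a smoothing_factor between 1.5 and 2.5")
    return {
        "prompt": prompt,
        "max_length": 200,
        "temperature": 1.0,              # smoothing works alongside temperature
        "smoothing_factor": smoothing_factor,
    }

payload = build_generate_payload("Write a short scene.", 2.0)
print(json.dumps(payload, indent=2))
# POST this to http://localhost:5001/api/v1/generate while KoboldCpp is running.
```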
NOTE: For "text-generation-webui":

-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model).

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be

OTHER OPTIONS:

- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor").

- If the interface/program you are using to run AI models supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted.
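For readers curious what the adjustment actually does: quadratic sampling is commonly described as penalizing each logit by the squared distance from the top logit before softmax, which reshapes the distribution around the top token. The sketch below is illustrative only; exact formulas vary between implementations, so treat it as a rough model of the effect, not any specific program's code.

```python
import math

# Illustrative sketch of "Quadratic Sampling" ("smoothing"): each logit is
# pulled down by smoothing_factor times its squared distance from the peak.
# This is one common formulation, not a specific implementation's exact code.

def quadratic_smooth(logits, smoothing_factor):
    peak = max(logits)
    return [peak - smoothing_factor * (x - peak) ** 2 for x in logits]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

raw = [2.0, 1.0, 0.0]
for sf in (1.5, 2.5):
    print(sf, [round(p, 3) for p in softmax(quadratic_smooth(raw, sf))])
```

Note how tokens far from the top logit are penalized much more heavily than near-ties, which is why the setting changes the "feel" of generation differently than a plain temperature change.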
<h3>EXAMPLES:</h3>

Examples are created using "temp=0", minimal parameters, and no chat/prompt template. Below are the least creative outputs.