DavidAU committed on
Commit b33d355
1 Parent(s): da7e991

Update README.md

Files changed (1)
  1. README.md +13 -0
README.md CHANGED
@@ -96,6 +96,19 @@ OTHER OPTIONS:
 
  - If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted.
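
For runtimes that expose logits directly rather than a "smoothing" control, the sketch below illustrates what a quadratic ("smoothing") adjustment is commonly understood to do: each logit is bent around the current top logit by a `smoothing_factor`, so tokens near the top move closer together while distant tokens are pushed further away. The function name, the factor value, and the exact formula are illustrative assumptions, not settings taken from this model card.

```python
import numpy as np

def quadratic_smoothing(logits: np.ndarray, smoothing_factor: float = 0.3) -> np.ndarray:
    """Illustrative quadratic ("smoothing") transform applied to raw logits.

    Assumed formulation: every logit is replaced by a downward parabola
    centered on the current maximum logit, so small gaps to the top token
    shrink while large gaps grow; larger factors suppress the tail more.
    """
    max_logit = logits.max()
    return -smoothing_factor * (logits - max_logit) ** 2 + max_logit

# Usage sketch: transform the logits, then sample from the softmax as usual.
logits = np.array([4.0, 3.2, 2.5, -1.0])
smoothed = quadratic_smoothing(logits, smoothing_factor=0.3)
probs = np.exp(smoothed - smoothed.max())
probs /= probs.sum()
print(probs.round(3))
```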
 
+ <B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
+
+ This is a "Class 1" model.
+
+ For all settings used with this model (including specifics for its "class"), example generations, and an advanced settings guide (which often addresses model issues) with methods to improve performance for all use cases, including chat, roleplay, and others, please see:
+
+ [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
+
+ The same page also lists all parameters used for generation, along with advanced parameters and samplers to get the most out of this model:
+
+ [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
+
+
 <h3> Sample Prompt and Models Compared:</h3>
 
 Prompt tested with "temp=0" to ensure compliance, 2048 context (model supports 32768 context / 32k), and "chat" template for LLAMA3.
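
As a rough illustration of the test conditions above (temp=0, a 2048-token context window, and the LLAMA3 chat template), the sketch below uses llama-cpp-python with its built-in "llama-3" chat format. The GGUF file name and the prompt are placeholders, and this is an assumed harness, not the exact setup used to produce the comparisons.

```python
from llama_cpp import Llama

# Assumed setup: a local GGUF quant of the model loaded with llama-cpp-python.
# "model-Q4_K_M.gguf" is a placeholder file name, not an actual release artifact.
llm = Llama(
    model_path="model-Q4_K_M.gguf",
    n_ctx=2048,              # context used for the test (the model itself supports 32k)
    chat_format="llama-3",   # built-in Llama-3 chat template
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short scene set in a crumbling city."}],
    temperature=0.0,         # temp=0 so the output is deterministic / compliance-focused
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```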