FPHam committed on
Commit 7262107
1 Parent(s): 6fc03ca

Update README.md

Files changed (1)
  1. README.md +9 -1
README.md CHANGED
@@ -35,4 +35,12 @@ Instead of confidently proclaiming something (or confidently hallucinating other
35   
36   
37   
38 - **Everything COT uses Llama 3 instruct template**

35   
36   
37   
38 + **Everything COT uses Llama 3 instruct template**
39 + 
40 + The correct jinja chat_template is in tokenizer_config.json
41 + 
42 + **Parameters**
43 + 
44 + It's up to you to discover the best parameters that work. I tested it in the oobabooga WebUI using the very off-the-shelf min_p preset: Temperature: 1, Top_p: 1, Top_k: 0, Typical_p: 1, min_p: 0.05, repetition_penalty: 1
45 + 
46 + Different parameters, such as temperature, will affect the model's talkativeness and investigative properties. If you find something really good, let me know and I'll post it here.
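For reference, the Llama 3 instruct template that the chat_template in tokenizer_config.json renders looks roughly like the sketch below. This is a hand-written illustration of the prompt format, not the actual jinja template; in practice you would call `tokenizer.apply_chat_template` from `transformers` and let the bundled template do this for you.

```python
def format_llama3(messages):
    """Sketch of the Llama 3 instruct prompt format.

    messages: list of {"role": ..., "content": ...} dicts,
    e.g. roles "system", "user", "assistant".
    """
    out = "<|begin_of_text|>"
    for m in messages:
        out += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Open an assistant header so the model generates the reply next.
    out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out

prompt = format_llama3([{"role": "user", "content": "Why is the sky blue?"}])
```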
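To make the min_p preset above concrete: with the other samplers effectively disabled (Top_p 1, Top_k 0, Typical_p 1), min_p is the only filter, and min_p: 0.05 means tokens whose probability is below 5% of the most likely token's probability are discarded before sampling. A minimal sketch of that filtering idea (the general min-p algorithm, not oobabooga's actual implementation):

```python
def min_p_filter(probs, min_p=0.05):
    """Keep tokens whose probability is at least min_p times the
    top token's probability, then renormalize the survivors."""
    threshold = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= threshold}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

# Hypothetical next-token distribution for illustration.
probs = {"the": 0.50, "a": 0.30, "an": 0.18, "xyzzy": 0.02}
filtered = min_p_filter(probs, min_p=0.05)
# "xyzzy" falls below the threshold (0.02 < 0.05 * 0.50 = 0.025)
# and is removed; the remaining tokens are renormalized.
```

Because the threshold scales with the top token's probability, min_p prunes aggressively when the model is confident and permissively when it is not, which is why temperature still meaningfully changes the model's behavior under this preset.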