Legal: This model is based on Llama-3-8b, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
**Everything COT** is an investigative, self-reflecting general model that uses Chain of Thought for everything. And I mean everything.
Instead of confidently proclaiming something (or confidently hallucinating other things) like most models, it carries an internal dialogue with itself and often casts doubt on uncertain topics while looking at them from various sides.
The correct jinja chat_template is in tokenizer_config.json.
**Parameters**
It's up to you to discover the best parameters that work.
I tested it in oobabooga WebUI using the very off-the-shelf min_p preset: Temperature: 1, Top_p: 1, Top_k: 0, Typical_p: 1, min_p: 0.05, repetition_penalty: 1.
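The preset above can be written out as a plain settings dict, and the core of min_p sampling itself is simple: keep only the tokens whose probability is at least `min_p` times the most likely token's probability. A minimal sketch (illustrative only, not the WebUI's actual code):

```python
# The min_p preset from above, as a plain dict. The key names mirror
# common sampler settings; this is not any library's real config object.
MIN_P_PRESET = {
    "temperature": 1.0,
    "top_p": 1.0,
    "top_k": 0,          # 0 disables top-k filtering
    "typical_p": 1.0,
    "min_p": 0.05,
    "repetition_penalty": 1.0,
}

def min_p_filter(probs, min_p):
    """Keep token indices whose probability is >= min_p * max(probs).

    This is the core idea of min_p sampling: the cutoff scales with how
    confident the model is in its top token.
    """
    threshold = min_p * max(probs)
    return [i for i, p in enumerate(probs) if p >= threshold]
```

With `min_p: 0.05`, a distribution like `[0.5, 0.3, 0.02]` keeps the first two tokens (threshold 0.025) and drops the long tail.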
Different parameters, like temperature, will affect the model's talkativeness and self-reflecting properties. If you find something really good, let me know and I'll post it here.
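As rough intuition for why temperature matters: the logits are divided by the temperature before the softmax, so values above 1 flatten the token distribution (more varied, exploratory text) while values below 1 sharpen it toward the top token. A minimal sketch of that mechanic:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature, then apply a softmax.

    Higher temperature -> flatter distribution (more varied sampling);
    lower temperature -> sharper distribution (more deterministic).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```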