Update README.md
README.md
CHANGED
@@ -12,4 +12,26 @@ tags:
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Llama 3 finetuned on my TRRR-CoT Dataset

**cookinai/TRRR-CoT**

- This was an attempt at synthetically generating a CoT dataset and then finetuning a model on it to see the results.
- From what I have noticed, when using the correct prompt template the model almost always uses the TRRR format, but I am still awaiting benchmark results to see whether this actually improves anything (see the prompting sketch below).
- TRRR stands for:

1. **Think** about your response
2. **Respond** how you normally would
3. **Reflect** on your response
4. **Respond** again, this time using all the information you now have

- The model usually tries to follow this format. It may mix things up a little, but it almost always reflects in some way, especially if you tell it to think step by step.

- Interestingly enough, when I finetuned Mistral 7B on the same data I could not get the model to produce CoT at all, while Llama 3 picked it up instantly after only one epoch.
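Since the card refers to "the correct prompt template" without spelling it out, here is a minimal, hedged inference sketch. It assumes the Alpaca-style instruction template from Unsloth's example notebooks and uses a hypothetical repo id placeholder; both may differ from the actual setup.

```python
# Minimal inference sketch (assumptions: Alpaca-style template, hypothetical repo id).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cookinai/TRRR-CoT-llama-3-8b"  # hypothetical placeholder; use the repo id of this model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Alpaca-style prompt (assumed template); the finetune is expected to answer in the
# TRRR order: Think -> Respond -> Reflect -> Respond again.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain why the sky is blue. Think step by step.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```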
- **Developed by:** cookinai
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
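The training script itself is not included in the card; purely as an illustration of the Unsloth + TRL combination mentioned above, the sketch below follows Unsloth's public Llama 3 QLoRA notebooks. The hyperparameters and the `text` field name are assumptions, not the exact recipe behind this checkpoint.

```python
# Rough QLoRA finetuning sketch with Unsloth + TRL (not the exact recipe used here).
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model the card lists as the starting point.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules taken from Unsloth's examples, assumed here).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)

# The TRRR-CoT dataset named in the card; the "text" column name is an assumption.
dataset = load_dataset("cookinai/TRRR-CoT", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,  # the card notes Llama 3 picked up the format after one epoch
        learning_rate=2e-4,
        logging_steps=10,
    ),
)
trainer.train()
```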
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)