Orion-zhen committed: Update README.md

README.md (changed)
@@ -14,6 +14,8 @@ pipeline_tag: text-generation
 
 This model is fine-tuned on the Gutenberg datasets using the KTO strategy. This is my first time using KTO, and I'm not sure how well the model actually performs.
 
+Compared to those large companies that remove accessories such as chargers and cables from their packaging, I have achieved **real** environmental protection by **truly** reducing energy consumption, rather than shifting costs to consumers.
+
 Check out the GGUF here: [Orion-zhen/Qwen2.5-7B-Gutenberg-KTO-Q6_K-GGUF](https://huggingface.co/Orion-zhen/Qwen2.5-7B-Gutenberg-KTO-Q6_K-GGUF)
 
 ## Details

@@ -39,8 +41,6 @@ To practice the **eco-friendly training**, I utilized various methods, including
 - batch size: 1
 - KTO beta: 0.1
 
-Compared to those large companies that remove accessories such as chargers and cables from their packaging, I have achieved **real** environmental protection by **truly** reducing energy consumption, rather than shifting costs to consumers.
-
 ### Train log
 
 ![training_loss](./training_loss.png)
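For readers who want to try a comparable setup, here is a minimal, untested sketch of KTO fine-tuning with TRL's `KTOTrainer`, using the batch size and beta listed in the Details section. The base model id and the dataset file are illustrative placeholders, not the author's actual training script.

```python
# Minimal sketch, assuming TRL's KTOTrainer (recent TRL versions); the base
# model id and dataset file below are illustrative placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

base_model = "Qwen/Qwen2.5-7B-Instruct"  # assumed starting checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# KTO expects rows with "prompt", "completion", and a boolean "label"
# marking each completion as desirable or undesirable.
train_dataset = load_dataset("json", data_files="gutenberg_kto.jsonl", split="train")

args = KTOConfig(
    output_dir="qwen2.5-7b-gutenberg-kto",
    beta=0.1,                        # "KTO beta: 0.1" from the Details section
    per_device_train_batch_size=1,   # "batch size: 1"
    num_train_epochs=1,
)

trainer = KTOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # `tokenizer=` in older TRL releases
)
trainer.train()
```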
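And a small, hedged example of loading the linked Q6_K GGUF with llama-cpp-python; the `filename` glob is an assumption about how the file is named inside that repo.

```python
# Minimal sketch using llama-cpp-python; the filename pattern is an assumption
# about the Q6_K file name inside the linked GGUF repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Orion-zhen/Qwen2.5-7B-Gutenberg-KTO-Q6_K-GGUF",
    filename="*q6_k.gguf",  # adjust to the actual GGUF file name
    n_ctx=4096,
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short passage in the style of a Gutenberg novel."}],
    max_tokens=256,
)
print(resp["choices"][0]["message"]["content"])
```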