This model was converted to GGUF format from [`Sao10K/14B-Qwen2.5-Freya-x1`](https://huggingface.co/Sao10K/14B-Qwen2.5-Freya-x1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/14B-Qwen2.5-Freya-x1) for more details on the model.
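
If you want to reproduce the conversion locally instead of going through the space, llama.cpp ships a conversion script. A minimal sketch, assuming a local download of the original checkpoint; the paths and output file names below are placeholders:

```bash
# Get llama.cpp and the Python dependencies for its conversion script
git clone https://github.com/ggml-org/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the HF checkpoint to GGUF at f16 (checkpoint path is a placeholder)
python llama.cpp/convert_hf_to_gguf.py ./14B-Qwen2.5-Freya-x1 \
  --outfile 14b-qwen2.5-freya-x1-f16.gguf --outtype f16

# Build llama-quantize and produce a smaller quant, e.g. Q4_K_M
cmake -B llama.cpp/build llama.cpp
cmake --build llama.cpp/build --target llama-quantize --config Release
./llama.cpp/build/bin/llama-quantize \
  14b-qwen2.5-freya-x1-f16.gguf 14b-qwen2.5-freya-x1-q4_k_m.gguf Q4_K_M
```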
---
Model details:

I decided to mess around with training methods again, considering the re-emergence of methods like multi-step training. Some people began doing it again, so why not? Inspired by AshhLimaRP's methodology, but done my way.

Freya-S1

A LoRA trained on ~1.1GB of literature and raw text over Qwen 2.5's base model. I cleaned the text and literature as best I could, but there may still be issues here and there.

Freya-S2

The first LoRA was applied over Qwen 2.5 Instruct, then I trained on top of that. I reduced the LoRA rank because it's mainly instruct data, plus other details I won't get into.

Recommended Model Settings | Look, I just use these; they work well enough. I don't even know how DRY or other meme samplers work. Your system prompt matters more anyway.

- Prompt Format: ChatML
- Temperature: 1+ # I don't know, man.
- min_p: 0.05
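
If you run the GGUF through llama.cpp, these settings map directly onto its template and sampler flags. A minimal sketch; the model file name is a placeholder:

```bash
# ChatML formatting, temperature 1.0, min_p 0.05, interactive chat mode
llama-cli -m ./14b-qwen2.5-freya-x1-q4_k_m.gguf \
  --chat-template chatml --temp 1.0 --min-p 0.05 -cnv
```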
Training time in total was ~10 hours on an 8xH100 node, sponsored by the Government of Singapore or something. Thanks for the national service allowance, MHA.

https://sao10k.carrd.co/ for contact.

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
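
```bash
brew install llama.cpp
```

You can then run inference with the CLI or spin up an OpenAI-compatible server. A minimal sketch; the `--hf-repo` and `--hf-file` values below are placeholders, so substitute this repo's actual name and the quant file you want:

```bash
# One-shot completion, pulling the GGUF straight from the Hub (placeholder names)
llama-cli --hf-repo Triangle104/14B-Qwen2.5-Freya-x1-GGUF \
  --hf-file 14b-qwen2.5-freya-x1-q4_k_m.gguf \
  -p "The meaning to life and the universe is"

# Or serve it over HTTP
llama-server --hf-repo Triangle104/14B-Qwen2.5-Freya-x1-GGUF \
  --hf-file 14b-qwen2.5-freya-x1-q4_k_m.gguf \
  -c 2048
```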