---
library_name: transformers
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
base_model: Qwen/Qwen2.5-72B-Instruct
tags:
- generated_from_trainer
model-index:
- name: 72B-Qwen2.5-Kunou-v1-run2
  results: []
---

# 72B-Qwen2.5-Kunou-v1

I do not really have anything planned for this model other than it being a generalist and roleplay model. It was just something made and planned in minutes.
<br>Same with the 14B and 32B versions.
<br>Kunou is the name of an OC I worked on for a couple of years, for a... fanfic. mmm...

A kind-of successor to L3-70B-Euryale-v2.2 in all but name? I'm keeping the Stheno/Euryale lineage tied to the Llama series for now.
<br>I had a version made on top of Nemotron, a supposed Euryale 2.4, but that flopped hard; it was not my cup of tea.
<br>This version was basically trained on a better, more cleaned-up version of the dataset used for Euryale and Stheno.

Recommended Model Settings | *Look, I just use these; they work fine enough. I don't even know how DRY or other meme samplers work. Your system prompt matters more anyway.*
```
Prompt Format: ChatML
Temperature: 1.1
min_p: 0.1
```
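
For reference, here is a minimal sketch of applying those settings with the `transformers` library (the library this card lists). This is an illustration, not an official snippet from the author: the repo id is assumed from the model name, the system prompt is a placeholder, and `min_p` sampling requires a reasonably recent `transformers` release.

```python
# Minimal sketch: load the model and sample with the recommended settings.
# Qwen2.5's chat template already emits ChatML (<|im_start|>/<|im_end|>),
# so apply_chat_template covers the "Prompt Format: ChatML" requirement.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sao10K/72B-Qwen2.5-Kunou-v1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Kunou."},  # placeholder system prompt
    {"role": "user", "content": "Introduce yourself in character."},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.1,  # recommended temperature
    min_p=0.1,        # recommended min_p
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```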

Future-ish plans:
<br>\- Complete this model series.
<br>\- Further refine the datasets used: better quality, more secondary chats, more creative-related domains. (Inspired by Drummer)
<br>\- Work on my other incomplete projects; about half a dozen have been on the backburner for a while now.

Special thanks to my wallet for funding this, my juniors who share a single braincell between them, and my current national service. Holidays = more calls.

Also, sorry for the inactivity. Life was in the way. It still is, just less so, for now.

https://sao10k.carrd.co/ for contact.

---

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.5.2`
```yaml
base_model: Qwen/Qwen2.5-72B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false
sequence_len: 16384
bf16: auto
fp16:
tf32: false
flash_attention: true

adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 16
lora_dropout: 0.1
lora_target_linear: true
lora_fan_in_fan_out:
peft_use_rslora: true

# Data
dataset_prepared_path: last_run_prepared
datasets:
  - path: datasets/amoral-full-sys-prompt.json # Unalignment Data
    type: customchatml
  - path: datasets/mimi-superfix-RP-filtered-fixed.json # RP / Creative-Instruct Data
    type: customchatml
  - path: datasets/hespera-smartshuffle.json # esperus-v2-Instruct Data
    type: customchatml
warmup_steps: 15

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true

# Iterations
num_epochs: 1

# Batching
gradient_accumulation_steps: 4
micro_batch_size: 1
gradient_checkpointing: "unsloth"

# Optimizer
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 0.000004
weight_decay: 0.1
max_grad_norm: 25.0

# Misc
deepspeed: ./deepspeed_configs/zero3_bf16.json
```

</details><br>
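
The config above trains a rank-128 LoRA adapter rather than a full finetune. As a rough point of reference only, here is what that adapter block corresponds to in `peft` terms; axolotl constructs this internally, so this is not how the run was launched, and the `target_modules` list is an assumption standing in for `lora_target_linear: true`.

```python
# Rough peft equivalent of the adapter block in the axolotl config above.
# Illustrative sketch only; axolotl builds its own PEFT config internally.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-72B-Instruct", torch_dtype="auto", device_map="auto"
)

lora_config = LoraConfig(
    r=128,             # lora_r: 128
    lora_alpha=16,     # lora_alpha: 16
    lora_dropout=0.1,  # lora_dropout: 0.1
    use_rslora=True,   # peft_use_rslora: true
    # Assumed expansion of lora_target_linear: true (all linear projections):
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```

With rsLoRA enabled, the adapter scaling is `lora_alpha / sqrt(lora_r)` instead of `lora_alpha / lora_r`, which is why an alpha of 16 against a rank of 128 is a workable pairing here.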