---
library_name: transformers
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
base_model: Qwen/Qwen2.5-72B-Instruct
tags:
- generated_from_trainer
model-index:
- name: 72B-Qwen2.5-Kunou-v1
  results: []
---

![Kunou](https://huggingface.co/Sao10K/72B-Qwen2.5-Kunou-v1/resolve/main/knn.png)

**Sister Versions for Lightweight Use!**

[32B-Kunou-v1](https://huggingface.co/Sao10K/32B-Qwen2.5-Kunou-v1)

[14B-Kunou-v1](https://huggingface.co/Sao10K/14B-Qwen2.5-Kunou-v1)

# 72B-Qwen2.5-Kunou-v1

I do not really have anything planned for this model other than it being a generalist and roleplay model. It was just something made and planned in minutes.
Same with the 14B and 32B versions.
Kunou's the name of an OC I worked on for a couple of years, for a... fanfic. mmm... A kind-of successor to L3-70B-Euryale-v2.2 in all but name? I'm keeping the Stheno/Euryale lineage on the Llama series for now.
I had a version made on top of Nemotron, a supposed Euryale 2.4, but that flopped hard; it was not my cup of tea.
This version is basically a better, more cleaned-up take on the datasets used for Euryale and Stheno.

Recommended Model Settings | *Look, I just use these, they work fine enough. I don't even know how DRY or other meme samplers work. Your system prompt matters more anyway.*

```
Prompt Format: ChatML
Temperature: 1.1
min_p: 0.1
```
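For reference, here is a minimal sketch (not part of the original card) of applying these settings with the transformers library; the system prompt, max_new_tokens, and overall setup are illustrative assumptions rather than recommendations from the author.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sao10K/72B-Qwen2.5-Kunou-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Qwen2.5's tokenizer ships a ChatML chat template, so apply_chat_template
# already produces the recommended prompt format.
messages = [
    {"role": "system", "content": "You are Kunou, an expressive roleplay partner."},  # example system prompt
    {"role": "user", "content": "Introduce yourself in one short paragraph."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.1,  # recommended above
    min_p=0.1,        # recommended above
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

min_p sampling needs a reasonably recent transformers release; most other backends (llama.cpp, exllama-based loaders, etc.) expose equivalent temperature and min_p knobs in their sampler settings.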
Future-ish plans:

\- Complete this model series.
\- Further refine the datasets used for quality, with more secondary chats and more creative-related domains. (Inspired by Drummer)
\- Work on my other incomplete projects. About half a dozen have been on the backburner for a while now.

Special thanks to my wallet for funding this, my juniors who share a single braincell between them, and my current national service.
Stay safe. There have been more emergency calls, more incidents this holiday season.

Also, sorry for the inactivity. Life was in the way. It still is, just less so, for now. Burnout is a thing, huh?

https://sao10k.carrd.co/ for contact.

---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.5.2`
```yaml
base_model: Qwen/Qwen2.5-72B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false
sequence_len: 16384
bf16: auto
fp16:
tf32: false
flash_attention: true

adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 16
lora_dropout: 0.1
lora_target_linear: true
lora_fan_in_fan_out:
peft_use_rslora: true

# Data
dataset_prepared_path: last_run_prepared
datasets:
  - path: datasets/amoral-full-sys-prompt.json # Unalignment Data - Cleaned Up from Original, Split to its own file
    type: customchatml
  - path: datasets/mimi-superfix-RP-filtered-fixed.json # RP / Creative-Instruct Data
    type: customchatml
  - path: datasets/hespera-smartshuffle.json # Hesperus-v2-Instruct Data
    type: customchatml
warmup_steps: 15

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true

# Iterations
num_epochs: 1

# Batching
gradient_accumulation_steps: 4
micro_batch_size: 1
gradient_checkpointing: "unsloth"

# Optimizer
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 0.000004
weight_decay: 0.1
max_grad_norm: 25.0

# Misc
deepspeed: ./deepspeed_configs/zero3_bf16.json
```

</details>
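Since the config above trains a rank-128 LoRA (with RSLoRA) rather than full weights, the adapter would need to be merged into Qwen2.5-72B-Instruct to yield a standalone checkpoint. Below is a minimal merge sketch using peft; the adapter path and output directory are hypothetical placeholders, and this is not necessarily the exact pipeline used for the release.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-72B-Instruct"
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Load the adapter produced by the axolotl run (placeholder path),
# then fold its weights into the base model.
merged = PeftModel.from_pretrained(base, "lora-out").merge_and_unload()

merged.save_pretrained("72B-Qwen2.5-Kunou-v1")
AutoTokenizer.from_pretrained(base_id).save_pretrained("72B-Qwen2.5-Kunou-v1")
```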