NOTE: THIS QUANTIZATION IS BROKEN
Yi 34B Merge v8
A merge of several Yi 34B 200K models using the new DARE TIES method via mergekit. It was quantized with exllamav2 against ~300K tokens of calibration data (a sci-fi story, a fantasy story, and a Vicuna-format chat) for optimal long-context storywriting performance.
See the main model card: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-megamerge-v8
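The actual merge recipe lives in the main model card linked above. For intuition only, here is a rough NumPy sketch (not the mergekit implementation, and the model weights here are placeholders) of what a DARE TIES merge does per tensor: each fine-tuned model's delta from the base is randomly sparsified and rescaled (DARE), then a majority sign is elected across models and only the agreeing deltas are averaged back onto the base weights (TIES).

```python
import numpy as np

def dare_ties_merge(base, finetuned, densities, weights, seed=0):
    """Illustrative per-tensor DARE TIES merge (simplified sketch).

    base       : np.ndarray, the base model tensor
    finetuned  : list of np.ndarray, fine-tuned tensors of the same shape
    densities  : fraction of each delta to KEEP (DARE drops the rest)
    weights    : relative weight of each model in the final sum
    """
    rng = np.random.default_rng(seed)
    deltas = []
    for ft, density, w in zip(finetuned, densities, weights):
        delta = ft - base
        # DARE: randomly drop (1 - density) of the delta, rescale the survivors
        mask = rng.random(delta.shape) < density
        deltas.append(w * mask * delta / density)

    stacked = np.stack(deltas)
    # TIES: elect a majority sign per parameter, keep only agreeing deltas
    elected_sign = np.sign(stacked.sum(axis=0))
    agree = np.sign(stacked) == elected_sign
    kept = np.where(agree, stacked, 0.0)
    # Average the surviving deltas and add them back onto the base weights
    denom = np.maximum(agree.sum(axis=0), 1)
    return base + kept.sum(axis=0) / denom
```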
Prompt template: Orca-Vicuna
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
It might also recognize ChatML and possibly Alpaca-like formats. Raw prompting, as described here, is also effective: https://old.reddit.com/r/LocalLLaMA/comments/18zqy4s/the_secret_to_writing_quality_stories_with_llms/
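As a quick illustration, an Orca-Vicuna prompt can be assembled with a small helper like the one below (the function name and example messages are placeholders, not part of this repo):

```python
def format_orca_vicuna(system_message: str, prompt: str) -> str:
    # Assemble an Orca-Vicuna style prompt; the model's reply is
    # generated after the trailing "ASSISTANT:" tag.
    return (
        f"SYSTEM: {system_message}\n"
        f"USER: {prompt}\n"
        f"ASSISTANT:"
    )

text = format_orca_vicuna(
    "You are a skilled long-form fiction writer.",
    "Continue the story from the last chapter.",
)
```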
Running
A 24GB GPU can run 3.1bpw Yi-34B-200K models at 75K context with exllamav2 and a performant UI like exui. I go into more detail in this post
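For reference, a long-context load with the exllamav2 Python API looks roughly like this. The model path, the 75K figure, and the 8-bit cache choice are illustrative assumptions rather than settings from this repo, and exact API details vary by exllamav2 version:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_8bit, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator

config = ExLlamaV2Config()
config.model_dir = "/path/to/v8-exl2-3.1bpw-fiction"  # placeholder path
config.prepare()
config.max_seq_len = 75000               # context length to allocate

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_8bit(model, lazy=True)   # 8-bit cache helps fit long context in 24GB
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
```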
As with other Yi models, try running a lower temperature with 0.05+ MinP, a little repetition penalty, maybe mirostat with a low tau, and no other samplers. Yi tends to run "hot" by default, and it really needs a low temperature + MinP to cull the huge vocabulary.
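Concretely, those suggestions translate to something like the sampler settings below, continuing from the loading sketch above. The values are illustrative starting points rather than tuned recommendations, and the attribute names follow exllamav2's ExLlamaV2Sampler.Settings:

```python
from exllamav2.generator import ExLlamaV2Sampler

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7                 # run Yi cooler than the default
settings.min_p = 0.05                      # MinP >= 0.05 to cull the huge vocabulary
settings.token_repetition_penalty = 1.05   # a little repetition penalty
settings.top_k = 0                         # disable other samplers
settings.top_p = 1.0
# Optional: mirostat with a low tau instead of plain temperature sampling
# settings.mirostat = True
# settings.mirostat_tau = 1.5

# output = generator.generate_simple(text, settings, num_tokens=512)
```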
Quantization Commands
First pass (measurement; writes the layer measurements to v8meas.json):
python /home/alpha/AI/exllamav2/convert.py --in_dir /home/alpha/FastModels/v8/v8 -o /home/alpha/FastModels/scratch -om /home/alpha/FastModels/v8meas.json --cal_dataset /home/alpha/Documents/stories.parquet -ml 32768 -mr 8 -ss 4096 -b 4.0 -hb 6 -nr
Second pass (quantization to 4.0 bpw with a 6-bit head, using the measurement file; writes the finished model to v8-exl2-4bpw-fiction):
python /home/alpha/AI/exllamav2/convert.py --in_dir /home/alpha/FastModels/v8/v8 -o /home/alpha/FastModels/scratch -m /home/alpha/FastModels/v8meas.json --cal_dataset /home/alpha/Documents/stories.parquet -l 12288 -r 26 -ml 32768 -mr 8 -ss 4096 -b 4.0 -hb 6 -cf /home/alpha/FastModels/v8-exl2-4bpw-fiction -nr