LlamaSlerp1-8B / mergekit_config.yml
models:
  - model: allenai/Llama-3.1-Tulu-3-8B
  - model: DreadPoor/BaeZel-8B-LINEAR
merge_method: slerp
base_model: allenai/Llama-3.1-Tulu-3-8B
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: Tulu-3 for the input & output layers, BaeZel in the middle layers
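
For reference, a minimal sketch of what the `t` schedule does: with slerp, `t = 0` keeps the base model (Tulu-3) and `t = 1` takes the other model (BaeZel), and the five anchor values are interpolated across the layer stack so the middle layers lean toward BaeZel while the first and last layers stay close to Tulu-3. The helper below is illustrative only, not mergekit's actual implementation, and the layer count is an assumption for an 8B Llama; the config itself would typically be run with mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml mergekit_config.yml ./output-model`).

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors.

    t = 0 returns v0 (base model), t = 1 returns v1. Falls back to
    plain linear interpolation when the tensors are nearly colinear.
    """
    v0_flat, v1_flat = v0.ravel(), v1.ravel()
    v0_unit = v0_flat / (np.linalg.norm(v0_flat) + eps)
    v1_unit = v1_flat / (np.linalg.norm(v1_flat) + eps)
    dot = np.clip(np.dot(v0_unit, v1_unit), -1.0, 1.0)
    omega = np.arccos(dot)
    if abs(np.sin(omega)) < eps:
        return (1 - t) * v0 + t * v1  # near-colinear: lerp instead
    out = (np.sin((1 - t) * omega) * v0_flat + np.sin(t * omega) * v1_flat) / np.sin(omega)
    return out.reshape(v0.shape)

# The five t values are anchor points spread over the layer indices;
# a rough equivalent of the per-layer schedule:
t_anchors = [0, 0.5, 1, 0.5, 0]
num_layers = 32  # assumed depth for an 8B Llama
layer_t = np.interp(
    np.linspace(0, 1, num_layers),      # normalized layer position
    np.linspace(0, 1, len(t_anchors)),  # anchor positions
    t_anchors,
)
# layer_t starts near 0 (Tulu-3), peaks near 1 (BaeZel) in the middle
# layers, and returns to 0 (Tulu-3) at the output layers.
```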