Where's the Sauce?
#1
by Joseph717171 - opened
Can you please add the merge recipe? I like seeing what worked for you and learning from your work. Thanks in advance, @Undi95 .
I didn't save it, but I can recreate it when I get back on my PC to give you an idea haha
I was dumb in this one, shit
No worries! 😁
```yaml
base_model: alpindale/Mistral-7B-v0.2-hf
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 32]
    model: alpindale/Mistral-7B-v0.2-hf
  - layer_range: [0, 32]
    model: NeverSleep/Noromaid-7B-0.4-DPO
    parameters:
      weight: 0.37
  - layer_range: [0, 32]
    model: Undi95/LewdMistral-7B-0.2
    parameters:
      weight: 0.33
  - layer_range: [0, 32]
    model: cgato/Thespis-CurtainCall-7b-v0.2.2
    parameters:
      weight: 0.32
  - layer_range: [0, 32]
    model: Undi95/Toppy-M-7B
    parameters:
      weight: 0.15
  - layer_range: [0, 32]
    model: cgato/Thespis-7b-v0.5-SFTTest-2Epoch
    parameters:
      weight: 0.38
  - layer_range: [0, 32]
    model: tavtav/eros-7b-test
    parameters:
      weight: 0.18
```
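For anyone curious what `task_arithmetic` does with those weights: conceptually, each fine-tune contributes a "task vector" (its parameters minus the base model's), and the merge adds the weighted sum of those vectors back onto the base. A minimal numeric sketch, purely illustrative (mergekit's actual implementation works on full model tensors, not scalars):

```python
# Conceptual sketch of task_arithmetic merging:
# merged = base + sum_i( weight_i * (model_i - base) )
def task_arithmetic(base, models, weights):
    """Merge parameter values via weighted task vectors."""
    merged = base
    for m, w in zip(models, weights):
        merged += w * (m - base)  # task vector = fine-tune minus base
    return merged

# Toy scalar standing in for a single parameter:
# 1.0 + 0.5*(1.2-1.0) + 0.25*(0.8-1.0) = 1.05
result = task_arithmetic(1.0, [1.2, 0.8], [0.5, 0.25])
print(result)
```

Note the weights in the recipe don't need to sum to 1 with this method; each one just scales how strongly that model's delta from the base is applied.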
I think it's the exact recipe; it comes from Lemonade, I just replaced/added some of my models.