
Art made by myself using AI (ayam).

Ayam 2x8B

Another MoE, this time using L3.
Recipe: Sao's Stheno-v3.2 + L3 8B Instruct.

This model is intended for personal use, but I think it's really good and worth sharing. Stheno-v3.2 is, as you probably know, very good: in creative writing, RP, and ERP it's far better than L3 Instruct, and honestly it's the best L3 finetune I've tried so far, so I liked it very much. But while playing with it, I felt the model was (a bit) dumber than L3 Instruct: it couldn't handle complex scenarios well and got confused in multi-character scenarios, at least in my experience. So I tried to improve its intelligence while still preserving its creativity.

Why a MoE and not a merge?
Well... two models working together is always better than merging them into one, and (surprisingly) the result far exceeded my expectations.
I also think merging models can sometimes damage their quality.
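For the curious, a 2x8B MoE like this can be assembled with mergekit's MoE mode. The sketch below is illustrative only: the model IDs, gating prompts, and gate mode are my assumptions, not the exact recipe used here.

```yaml
# Illustrative mergekit-moe config (assumed IDs and prompts, not the actual recipe)
base_model: meta-llama/Meta-Llama-3-8B-Instruct
gate_mode: hidden            # route tokens by hidden-state similarity to the prompts below
dtype: bfloat16
experts:
  - source_model: Sao10K/L3-8B-Stheno-v3.2
    positive_prompts:
      - "roleplay"
      - "creative writing"
  - source_model: meta-llama/Meta-Llama-3-8B-Instruct
    positive_prompts:
      - "reasoning"
      - "follow the instructions"
```

With two experts over an 8B base, the shared layers plus both expert copies land in the ~13.7B-parameter range reported below.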

Testing conditions (using SillyTavern):

Context and Instruct template: Llama 3 Instruct.
Sampler settings:

- Temperature: 1.15
- Min P: 0.075
- Top K: 50
- Everything else disabled.
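To make the settings above concrete, here's a toy Python sketch of what that sampler chain does, assuming a temperature → top-k → min-p order (backends like llama.cpp implement this natively; this is just for illustration):

```python
import math
import random

def sample_next_token(logits, temperature=1.15, min_p=0.075, top_k=50, rng=None):
    """Toy sampler: temperature-scaled softmax, then top-k, then min-p.
    `logits` maps token -> raw score. Illustrative only."""
    rng = rng or random.Random(0)
    # Temperature: divide logits before softmax (higher temp = flatter distribution).
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())
    exps = {t: math.exp(v - m) for t, v in scaled.items()}
    z = sum(exps.values())
    probs = {t: e / z for t, e in exps.items()}
    # Top-K: keep only the k most probable tokens.
    kept = dict(sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k])
    # Min-P: drop tokens whose probability is below min_p * (max probability).
    cutoff = min_p * max(kept.values())
    kept = {t: p for t, p in kept.items() if p >= cutoff}
    # Renormalise the survivors and sample one token.
    z = sum(kept.values())
    tokens, weights = zip(*((t, p / z) for t, p in kept.items()))
    return rng.choices(tokens, weights=weights, k=1)[0]
```

Min-P at 0.075 prunes tokens less than 7.5% as likely as the top candidate, which keeps creativity while cutting off the long garbage tail.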


Weights are available in BF16 and GGUF.

Model size: 13.7B params (safetensors, BF16).

Model tree for R136a1/Ayam-2x8B: 3 quantized versions available.