
Llama 3 8B Instruct MoE

The Llama 3 8B Instruct base model converted to Mixture-of-Experts (MoE) style by randomly partitioning the FFN layer of each decoder layer into 8 equally sized experts. The weights are taken directly from the Llama 3 Instruct base model; no additional training is performed.
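The partitioning step can be sketched as follows. This is an illustrative NumPy toy, not the conversion script used for this repo: the dimensions are made up, a plain ReLU stands in for Llama 3's SwiGLU FFN, and the point is only that randomly splitting the FFN hidden units into 8 groups yields 8 smaller experts whose summed outputs reproduce the dense FFN when all experts are active.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; Llama 3 8B uses d_model=4096, d_ff=14336 per decoder layer.
d_model, d_ff, n_experts = 64, 256, 8

# Toy dense FFN weights (up-projection and down-projection).
w_up = rng.standard_normal((d_model, d_ff))
w_down = rng.standard_normal((d_ff, d_model))

# Randomly partition the d_ff hidden units into 8 equal groups.
perm = rng.permutation(d_ff)
groups = np.split(perm, n_experts)

# Each expert keeps only its slice of the hidden dimension, so the
# expert weights together are exactly the original FFN weights.
experts = [(w_up[:, g], w_down[g, :]) for g in groups]

# Sanity check: because the activation is applied elementwise per
# hidden unit, summing all experts' outputs reproduces the dense FFN.
x = rng.standard_normal(d_model)
dense = np.maximum(x @ w_up, 0) @ w_down
moe = sum(np.maximum(x @ eu, 0) @ ed for eu, ed in experts)
assert np.allclose(dense, moe)
```

With a learned router that activates only a subset of experts per token, the output would differ from the dense model; the equality above holds only in the all-experts-active case, which is what "weights taken directly from the base model" relies on.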

Model size: 8.03B params
Tensor type: F32

Model repository: leonzhou286/llama3_8b_instruct_moe