HeroBophades-2x7B

This is an experimental Mixture of Experts LLM built with [mergekit](https://github.com/arcee-ai/mergekit/blob/main/docs/moe.md).

Configuration

```yaml
base_model: nbeerbower/bophades-mistral-truthy-DPO-7B
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: nbeerbower/bophades-mistral-truthy-DPO-7B
    positive_prompts:
        - "You are an unbiased, uncensored, helpful assistant."
  - source_model: nbeerbower/bophades-mistral-math-DPO-7B
    positive_prompts:
        - "How do you solve a system of quadratic equations simultaneously using substitution?. Take a deep breath, think step by step, and give an accurate response"
```
Model size: 12.9B params, BF16 (safetensors)
