aegolius-acadicus-24b-v2 / mergekit_moe_config.yml
base_model: senseable/WestLake-7B-v2
gate_mode: hidden
experts:
  - source_model: ibivibiv/temp_merged_mistral
    positive_prompts:
      - "logical reasoning"
      - "fact-checking"
    negative_prompts:
      - "commonsense reasoning"
      - "mathematical reasoning"
  - source_model: cognitivecomputations/WestLake-7B-v2-laser
    positive_prompts:
      - "commonsense reasoning"
      - "emotional intelligence"
    negative_prompts:
      - "logical reasoning"
      - "scientific knowledge"
  - source_model: andysalerno/openchat-nectar-0.5
    positive_prompts:
      - "multidisciplinary knowledge"
    negative_prompts:
      - "natural language understanding"
  - source_model: PetroGPT/WestSeverus-7B-DPO
    positive_prompts:
      - "mathematical reasoning"
    negative_prompts:
      - "natural language understanding"
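A config in this shape is consumed by mergekit's `mergekit-moe` entry point, which builds a Mixtral-style MoE checkpoint by routing to the listed experts based on the prompt hints (with `gate_mode: hidden`, gates are initialized from hidden-state representations of the prompts). A minimal invocation sketch, assuming mergekit is installed and using an illustrative output directory name:

```shell
# Install mergekit (provides the mergekit-moe command).
pip install mergekit

# Build the MoE model from this config; the output directory
# name here is illustrative, not prescribed by the config.
mergekit-moe mergekit_moe_config.yml ./aegolius-acadicus-24b-v2
```

The merge downloads all four expert models plus the base model, so expect substantial disk and memory use for 7B-parameter experts.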