---
license: apache-2.0
---

# Model mera-mix-4x7B

This is a mixture of experts (MoE) model that is half as large (4 experts instead of 8) as [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1),
while remaining comparable to it across different benchmarks. You can use it as a drop-in replacement for Mixtral-8x7B and get much faster inference; a usage sketch follows below.
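
Below is a minimal usage sketch with the `transformers` library, assuming the checkpoint loads through the standard `AutoModelForCausalLM` interface like other Mixtral-style models; the prompt, dtype, and generation settings are illustrative and should be adapted to your hardware.

```python
# Minimal sketch: load mera-mix-4x7B and generate text.
# Assumes a GPU with enough memory; adjust torch_dtype/device_map as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meraGPT/mera-mix-4x7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative prompt; swap in your own as you would with Mixtral-8x7B.
prompt = "Explain mixture-of-experts models in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```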

mera-mix-4x7B achieves 76.37 on the OpenLLM eval vs. 72.7 for Mixtral-8x7B (as shown [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mistralai__Mixtral-8x7B-Instruct-v0.1)).

## OpenLLM Eval

|                            Model                            | ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|Average|
|-------------------------------------------------------------|----:|--------:|----:|---------:|---------:|----:|------:|
|[mera-mix-4x7B](https://huggingface.co/meraGPT/mera-mix-4x7B)|72.01|    88.82|63.67|     77.45|     84.61|71.65|  76.37|

Raw eval results are available in this [gist](https://gist.github.com/codelion/78f88333230801c9bbaa6fc22078d820).