---
license: apache-2.0
model-index:
- name: mera-mix-4x7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 72.95
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 89.17
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.44
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 77.17
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 85.64
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 66.11
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B
      name: Open LLM Leaderboard
---
# Model mera-mix-4x7B

This is a mixture of experts (MoE) model that is half as large (4 experts instead of 8) as [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), while remaining comparable to it across different benchmarks. You can use it as a drop-in replacement for Mixtral-8x7B and get much faster inference.

mera-mix-4x7B achieves 76.37 on the OpenLLM eval vs. 72.7 for Mixtral-8x7B (as shown [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mistralai__Mixtral-8x7B-Instruct-v0.1)).

You can try the model with the [Mera Mixture Chat](https://huggingface.co/spaces/meraGPT/mera-mixture-chat).
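
To run the model locally, the snippet below is a minimal sketch using the Hugging Face `transformers` library. It assumes the `meraGPT/mera-mix-4x7B` repo id linked above and enough memory for a 4x7B MoE; the dtype and device placement are illustrative choices, not a prescribed setup.

```python
# Minimal sketch: load mera-mix-4x7B and generate text with transformers.
# Assumes a recent transformers release and enough memory for a 4x7B MoE;
# torch_dtype and device_map below are illustrative, not required settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meraGPT/mera-mix-4x7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to cut the memory footprint
    device_map="auto",           # spread layers across available devices
)

prompt = "Explain mixture-of-experts language models in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```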
## OpenLLM Eval
| Model | ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|Average|
|-------------------------------------------------------------|----:|--------:|----:|---------:|---------:|----:|------:|
|[mera-mix-4x7B](https://huggingface.co/meraGPT/mera-mix-4x7B)|72.01| 88.82|63.67| 77.45| 84.61|71.65| 76.37|

Raw eval results are available at this [gist](https://gist.github.com/codelion/78f88333230801c9bbaa6fc22078d820).
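
If you want to re-run one of these numbers yourself, the sketch below uses EleutherAI's lm-evaluation-harness through its `simple_evaluate` Python API, matching the 25-shot ARC-Challenge setting from the table. This is only an approximate recipe: the Open LLM Leaderboard pins a specific harness revision and configuration, so locally reproduced scores may not match the table exactly.

```python
# Hedged sketch: score mera-mix-4x7B on ARC-Challenge (25-shot) with
# lm-evaluation-harness (pip install lm-eval). The leaderboard pins a specific
# harness version/config, so local numbers may differ from the table above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meraGPT/mera-mix-4x7B,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size="auto",
)
print(results["results"]["arc_challenge"])  # per-metric scores, e.g. acc_norm
```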
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_meraGPT__mera-mix-4x7B).

| Metric |Value|
|---------------------------------|----:|
|Avg. |75.91|
|AI2 Reasoning Challenge (25-Shot)|72.95|
|HellaSwag (10-Shot) |89.17|
|MMLU (5-Shot) |64.44|
|TruthfulQA (0-shot) |77.17|
|Winogrande (5-shot) |85.64|
|GSM8k (5-shot) |66.11|