codelion committed on
Commit 2a2e839
1 Parent(s): 3fbb682

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -9,6 +9,8 @@ while been comparable to it across different benchmarks. You can use it as a dro
 
 mera-mix-4x7B achieves 76.37 on the openLLM eval v/s 72.7 by Mixtral-8x7B (as shown [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mistralai__Mixtral-8x7B-Instruct-v0.1)).
 
+You can try the model with the [Mera Mixture Chat](https://huggingface.co/spaces/meraGPT/mera-mixture-chat).
+
 ## OpenLLM Eval
 
 | Model | ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|Average|