---
license: apache-2.0
datasets:
- meta-math/MetaMathQA
language:
- en
pipeline_tag: text-generation
tags:
- Math
---

## Optimus-7B

<img src="_c3f4a76b-c0b1-4fba-9537-33f8fd697f2d.jpg" width="300" height="200" alt="Optimus-7B">

Fine-tuned on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) with the [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) dataset.

You can use the ChatML prompt format.
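
A minimal usage sketch with the `transformers` library. The model id `Q-bert/Optimus-7B` is taken from the results link below; the question and generation settings are illustrative assumptions, not fixed requirements:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model id taken from the leaderboard results link below; adjust as needed.
model_id = "Q-bert/Optimus-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# ChatML wraps each turn in <|im_start|>role ... <|im_end|> tags.
prompt = (
    "<|im_start|>user\n"
    "What is 15% of 240?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```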

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/Q-bert/Optimus-7B/results_2023-12-04T18-59-49.207215.json).
| Metric               | Value |
|----------------------|------:|
| Avg.                 | 69.09 |
| ARC (25-shot)        | 65.44 |
| HellaSwag (10-shot)  | 85.41 |
| MMLU (5-shot)        | 63.61 |
| TruthfulQA (0-shot)  | 55.79 |
| Winogrande (5-shot)  | 78.77 |
| GSM8K (5-shot)       | 65.50 |
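
The average is the unweighted mean of the six benchmark scores: (65.44 + 85.41 + 63.61 + 55.79 + 78.77 + 65.50) / 6 ≈ 69.09.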