---
license: apache-2.0
metrics:
  - accuracy
base_model: mistralai/Mistral-7B-Instruct-v0.2
inference: false
license_name: apache-2.0
model-index:
  - name: Metis-0.3
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 62.71
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mihaiii/Metis-0.3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 84.8
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mihaiii/Metis-0.3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 60.92
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mihaiii/Metis-0.3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 67.56
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mihaiii/Metis-0.3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 77.27
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mihaiii/Metis-0.3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 39.35
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mihaiii/Metis-0.3
          name: Open LLM Leaderboard
---

Built with Axolotl

An instruct-based fine-tune of mistralai/Mistral-7B-Instruct-v0.2.

It works well with long system prompts.

It isn't a general-purpose model: it shouldn't be used for storytelling, for example, but rather for reasoning and text comprehension.

This model was trained on a private dataset. The high GSM8K score is NOT due to the MetaMath dataset.

## Prompt Format

Use the format below (see the prompt guidelines from the base model):

```
<s>[INST] {system_message} . Say "Acknowledged!" if you understood. [/INST] Acknowledged! </s> [INST] {prompt} [/INST]
```
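
A minimal sketch of applying this format with the Hugging Face `transformers` library; the system message and question below are illustrative placeholders, not part of this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mihaiii/Metis-0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Placeholder system message and user prompt for illustration.
system_message = "You are a concise assistant that reasons step by step."
prompt = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

# Build the conversation string exactly as the format above describes.
text = (
    f'<s>[INST] {system_message} . Say "Acknowledged!" if you understood. [/INST] '
    f"Acknowledged! </s> [INST] {prompt} [/INST]"
)

# The template already contains <s>, so skip the tokenizer's own BOS token.
inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```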

## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Mihaiii/Metis-0.3).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 65.44 |
| AI2 Reasoning Challenge (25-Shot) | 62.71 |
| HellaSwag (10-Shot)               | 84.80 |
| MMLU (5-Shot)                     | 60.92 |
| TruthfulQA (0-shot)               | 67.56 |
| Winogrande (5-shot)               | 77.27 |
| GSM8k (5-shot)                    | 39.35 |