---
license: other
tags:
  - merge
  - mergekit
  - lazymergekit
  - microsoft/Orca-2-13b
  - KoboldAI/LLaMA2-13B-Psyfighter2
base_model:
  - KoboldAI/LLaMA2-13B-Psyfighter2
  - microsoft/Orca-2-13b
license_name: microsoft-research-license
model-index:
  - name: Psyfighter2-Orca2-13B-ties
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 62.46
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 81.74
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 60.31
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 55.4
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 77.27
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 43.67
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties
          name: Open LLM Leaderboard
---

# Psyfighter2-Orca2-ties

Psyfighter2-Orca2-ties is a merge of the following models using mergekit:

- [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2)
- [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b)

This is the very first merge I have ever attempted. The motivation behind it is to create a 13B version of jebcarter/psyonic-cetacean-20B. I don't have a good GPU (a GTX 1660 with 6 GB), so although I can merge the model, I cannot actually run it. However, the Open LLM Leaderboard scores this merge at 63.48 points on average, which is higher than both KoboldAI/LLaMA2-13B-Psyfighter2 and jebcarter/psyonic-cetacean-20B, so I must have done something right. The next step is to quantize this merge into GGUF so I can actually run it with KoboldCpp, roughly as sketched below.
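A rough sketch of that quantization step with llama.cpp's tooling. This is only an illustration: the script and binary names used here (`convert_hf_to_gguf.py`, `llama-quantize`) and the local paths are assumptions, and they vary between llama.cpp releases (older ones ship `convert.py` and `quantize` instead).

```python
# Sketch of the planned HF -> GGUF conversion using llama.cpp tooling.
# Assumptions: llama.cpp is cloned and built locally, and the merged model
# sits in ./Psyfighter2-Orca2-13B-ties; script/binary names match recent
# llama.cpp releases (older ones use convert.py and ./quantize instead).
import subprocess

merged_dir = "./Psyfighter2-Orca2-13B-ties"            # placeholder path to the merged model
f16_gguf = "psyfighter2-orca2-13b-ties.f16.gguf"
q4_gguf = "psyfighter2-orca2-13b-ties.Q4_K_M.gguf"

# 1. Convert the merged HF checkpoint to an f16 GGUF file.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", merged_dir,
     "--outfile", f16_gguf, "--outtype", "f16"],
    check=True,
)

# 2. Quantize it down (Q4_K_M here) so a 13B model fits in modest RAM/VRAM.
subprocess.run(
    ["llama.cpp/llama-quantize", f16_gguf, q4_gguf, "Q4_K_M"],
    check=True,
)

# The resulting .gguf file can then be loaded directly in KoboldCpp.
```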

## 🧩 Configuration

```yaml
models:
  - model: KoboldAI/LLaMA2-13B-Psyfighter2
  - model: microsoft/Orca-2-13b
    parameters:
      density: 0.40
      weight: [0, 0.3, 0.7, 1]
merge_method: ties
base_model: KoboldAI/LLaMA2-13B-Psyfighter2
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
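If you want to reproduce the merge, a minimal sketch of feeding the config above to mergekit's Python API, following the example in mergekit's README. Assumptions: `pip install mergekit`, the YAML saved as `config.yaml`, enough disk space for both 13B source models, and an output directory that is just a placeholder.

```python
# Sketch of reproducing the merge with mergekit's Python API, following the
# example in mergekit's README. Assumptions: `pip install mergekit`, the YAML
# above saved as config.yaml, and enough disk space for both 13B source models.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Psyfighter2-Orca2-13B-ties",  # where the merged weights are written (placeholder)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),       # the merge also works on CPU, just slower
        copy_tokenizer=True,                  # copy the base model's tokenizer into the output
        lazy_unpickle=False,                  # experimental low-memory loader; enable if RAM is tight
        low_cpu_memory=False,
    ),
)
```

The same file can also be passed to the `mergekit-yaml` command-line tool instead of using the Python API.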

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=tuantran1632001/Psyfighter2-Orca2-13B-ties).

| Metric                            | Value |
| --------------------------------- | ----: |
| Avg.                              | 63.48 |
| AI2 Reasoning Challenge (25-Shot) | 62.46 |
| HellaSwag (10-Shot)               | 81.74 |
| MMLU (5-Shot)                     | 60.31 |
| TruthfulQA (0-shot)               | 55.40 |
| Winogrande (5-shot)               | 77.27 |
| GSM8k (5-shot)                    | 43.67 |
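All of the benchmarks above are plain text-generation tasks, so the merged model can be loaded like any other Llama-2-13B checkpoint. A minimal sketch with Hugging Face Transformers, assuming you have enough memory for a 13B model in float16; the prompt and generation settings are only illustrations.

```python
# Minimal text-generation sketch with Hugging Face Transformers.
# Assumption: enough GPU/CPU memory for a 13B model in float16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tuantran1632001/Psyfighter2-Orca2-13B-ties"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # spread layers across whatever devices are available
)

prompt = "Write a short scene about a whale who dreams of flying."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```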