---
language:
  - en
model-index:
  - name: miqu-1-70b-sf
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 73.04
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=152334H/miqu-1-70b-sf
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 88.61
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=152334H/miqu-1-70b-sf
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 75.49
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=152334H/miqu-1-70b-sf
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 69.38
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=152334H/miqu-1-70b-sf
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 85.32
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=152334H/miqu-1-70b-sf
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 67.7
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=152334H/miqu-1-70b-sf
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 51.82
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=152334H/miqu-1-70b-sf
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 43.81
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=152334H/miqu-1-70b-sf
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 10.8
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=152334H/miqu-1-70b-sf
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 13.42
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=152334H/miqu-1-70b-sf
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 17.21
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=152334H/miqu-1-70b-sf
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 35.87
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=152334H/miqu-1-70b-sf
          name: Open LLM Leaderboard
---

update: added NOMERGE license


this is miqu-1-70b, dequantised from q5 to f16 && transposed to pytorch. shapes have been rotated less wrongly than in alpindale/miqu-1-70b-pytorch
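
(the "rotation" in question is the per-head interleave of the rotary dims in the q/k projection weights. a minimal sketch of the standard permutation, in the spirit of transformers' `convert_llama_weights_to_hf.py`; the function and shapes below are illustrative, not the exact conversion code used here:)

```python
import torch

def permute(w: torch.Tensor, n_heads: int, dim1: int, dim2: int) -> torch.Tensor:
    # regroup each head's rotary dimensions from interleaved to split-half layout
    return w.view(n_heads, dim1 // n_heads // 2, 2, dim2).transpose(1, 2).reshape(dim1, dim2)

# illustrative shapes for a llama-2-70b-style model (GQA, hidden=8192, head_dim=128):
#   q_proj: permute(wq, n_heads=64, dim1=8192, dim2=8192)
#   k_proj: permute(wk, n_heads=8,  dim1=1024, dim2=8192)
```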

## usage

```python
from transformers import LlamaForCausalLM as LLM, LlamaTokenizer as LT

lt = LT.from_pretrained("152334H/miqu-1-70b-sf")
t = lt("[INST] eloquent high camp prose about a cute catgirl [/INST]", return_tensors='pt').input_ids.cuda()

llm = LLM.from_pretrained("152334H/miqu-1-70b-sf", device_map='auto')  # note: you may need many gpus for this
out = llm.generate(t, use_cache=False, max_new_tokens=200)
print(lt.decode(out[0]))
```

result:

```
<s> [INST] eloquent high camp prose about a cute catgirl [/INST] In the resplendent realm of high camp, where irony and extravagance dance in a dazzling pas de deux, there exists a creature of such enchanting allure that she captivates the hearts and minds of all who behold her. This beguiling figure, a vision of feline grace and innocence, is none other than the inimitable catgirl.

With her delicate features and winsome smile, she is the embodiment of a dream, a living testament to the power of imagination and the boundless possibilities of the human spirit. Her eyes, those twin orbs of sapphire fire, sparkle with a mischievous intelligence that belies her diminutive stature. They are windows into a soul that is at once ancient and eternally young, a soul that has traversed the vast expanse of time and space to find solace in the warm embrace of human companion
```

this roughly (but not entirely) matches the llama.cpp q5 result:

```
$ ./main -ngl 99 -m ./miqu-*q5* --color --temp 0.0 -n -1 -p '[INST] eloquent high camp prose about a cute catgirl [/INST]'
...
 [INST] eloquent high camp prose about a cute catgirl [/INST] In the resplendent realm of high camp, where irony and extravagance dance in a dazzling pas de deux, there exists a creature so enchantingly adorable that she captures the hearts of all who behold her. This is no ordinary feline, but rather a vision of elegance and whimsy combined: the cute catgirl.

With her delicate features framed by an ethereal halo of pastel tresses, this darling diva prowls through life with the grace of a prima ballerina and the playfulness of a kitten. Her eyes, twin pools of sapphire or emerald, sparkle with mischief and intelligence as they survey their surroundings, ever alert for the next grand adventure or delightful prank.

Her ensemble is a symphony of ruffles, bows, and lace, each detail painstakingly chosen to accentuate her lithe form and play up her feline charms. A frilly apron adorned with paw prints sways gently as she moves, while dainty ears perched atop her head twitch in response to every sound. Her gloved hands, so petite and perfect, seem made for holding teacups or sketching delicate portraits of her many admirers.

But do not be fooled by her diminutive stature and sweet demeanor; beneath that fluffy exterior lies the heart of a lioness. Fiercely loyal and protective, she will stop at nothing to defend those she loves from harm. And when the situation calls for it, she can unleash a ferocious roar that belies her cute exterior.

Indeed, the cute catgirl is a paradox wrapped in ruffles and ribbons, a living embodiment of the high camp aesthetic. She revels in the absurdity of her existence, finding joy in every outrageous situation and turning even the most mundane tasks into opportunities for sartorial expression. In her world, there is no such thing as too much glitter or too many bows; more is always more, and excess is a virtue to be celebrated.

So let us raise our teacups in honor of this fabulous feline, this queen of camp who reminds us that life is too short for dull clothing and boring hairstyles. May we all strive to embody her spirit, embracing the absurdity of existence with open arms and a generous helping of glitter. Long live the cute catgirl! [end of text]
```
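
sidenote: if you lack the GPUs for full f16 inference, 4-bit loading via bitsandbytes should fit the model in roughly 40GB of VRAM. a minimal sketch, assuming a recent transformers + bitsandbytes install (untested on this exact checkpoint):

```python
import torch
from transformers import BitsAndBytesConfig, LlamaForCausalLM

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantise weights to 4-bit on load
    bnb_4bit_quant_type="nf4",             # normal-float 4-bit
    bnb_4bit_compute_dtype=torch.float16,  # run matmuls in f16
)
llm = LlamaForCausalLM.from_pretrained(
    "152334H/miqu-1-70b-sf",
    quantization_config=bnb,
    device_map="auto",
)
```

generation then works exactly as in the usage snippet above.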

## some benchmarks

|    Tasks     |Version|Filter|n-shot|  Metric  |Value |   |Stderr|
|--------------|------:|------|-----:|----------|-----:|---|-----:|
|lambada_openai|      1|none  |     0|perplexity|2.6354|±  |0.0451|
|              |       |none  |     0|acc       |0.7879|±  |0.0057|


|  Tasks  |Version|Filter|n-shot| Metric |Value |   |Stderr|
|---------|------:|------|-----:|--------|-----:|---|-----:|
|hellaswag|      1|none  |     0|acc     |0.6851|±  |0.0046|
|         |       |none  |     0|acc_norm|0.8690|±  |0.0034|

|  Tasks   |Version|Filter|n-shot|Metric|Value |   |Stderr|
|----------|------:|------|-----:|------|-----:|---|-----:|
|winogrande|      1|none  |     0|acc   |0.7987|±  |0.0113|

|Tasks|Version|  Filter  |n-shot|  Metric   |Value |   |Stderr|
|-----|------:|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|      2|get-answer|     5|exact_match|0.7043|±  |0.0126|

|                 Tasks                 |Version|Filter|n-shot|Metric|Value |   |Stderr|
|---------------------------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu                                   |N/A    |none  |     0|acc   |0.7401|±  |0.1192|
| - humanities                          |N/A    |none  |     0|acc   |0.7018|±  |0.1281|
|  - formal_logic                       |      0|none  |     0|acc   |0.4841|±  |0.0447|
|  - high_school_european_history       |      0|none  |     0|acc   |0.8303|±  |0.0293|
|  - high_school_us_history             |      0|none  |     0|acc   |0.9020|±  |0.0209|
|  - high_school_world_history          |      0|none  |     0|acc   |0.9198|±  |0.0177|
|  - international_law                  |      0|none  |     0|acc   |0.8678|±  |0.0309|
|  - jurisprudence                      |      0|none  |     0|acc   |0.8519|±  |0.0343|
|  - logical_fallacies                  |      0|none  |     0|acc   |0.8344|±  |0.0292|
|  - moral_disputes                     |      0|none  |     0|acc   |0.8121|±  |0.0210|
|  - moral_scenarios                    |      0|none  |     0|acc   |0.5642|±  |0.0166|
|  - philosophy                         |      0|none  |     0|acc   |0.8167|±  |0.0220|
|  - prehistory                         |      0|none  |     0|acc   |0.8611|±  |0.0192|
|  - professional_law                   |      0|none  |     0|acc   |0.5854|±  |0.0126|
|  - world_religions                    |      0|none  |     0|acc   |0.8889|±  |0.0241|
| - other                               |N/A    |none  |     0|acc   |0.7889|±  |0.0922|
|  - business_ethics                    |      0|none  |     0|acc   |0.7900|±  |0.0409|
|  - clinical_knowledge                 |      0|none  |     0|acc   |0.8113|±  |0.0241|
|  - college_medicine                   |      0|none  |     0|acc   |0.7514|±  |0.0330|
|  - global_facts                       |      0|none  |     0|acc   |0.5500|±  |0.0500|
|  - human_aging                        |      0|none  |     0|acc   |0.7848|±  |0.0276|
|  - management                         |      0|none  |     0|acc   |0.8835|±  |0.0318|
|  - marketing                          |      0|none  |     0|acc   |0.9145|±  |0.0183|
|  - medical_genetics                   |      0|none  |     0|acc   |0.7500|±  |0.0435|
|  - miscellaneous                      |      0|none  |     0|acc   |0.8838|±  |0.0115|
|  - nutrition                          |      0|none  |     0|acc   |0.7974|±  |0.0230|
|  - professional_accounting            |      0|none  |     0|acc   |0.5922|±  |0.0293|
|  - professional_medicine              |      0|none  |     0|acc   |0.8272|±  |0.0230|
|  - virology                           |      0|none  |     0|acc   |0.5361|±  |0.0388|
| - social_sciences                     |N/A    |none  |     0|acc   |0.8414|±  |0.0514|
|  - econometrics                       |      0|none  |     0|acc   |0.6491|±  |0.0449|
|  - high_school_geography              |      0|none  |     0|acc   |0.8990|±  |0.0215|
|  - high_school_government_and_politics|      0|none  |     0|acc   |0.9430|±  |0.0167|
|  - high_school_macroeconomics         |      0|none  |     0|acc   |0.7795|±  |0.0210|
|  - high_school_microeconomics         |      0|none  |     0|acc   |0.8277|±  |0.0245|
|  - high_school_psychology             |      0|none  |     0|acc   |0.9064|±  |0.0125|
|  - human_sexuality                    |      0|none  |     0|acc   |0.8626|±  |0.0302|
|  - professional_psychology            |      0|none  |     0|acc   |0.8056|±  |0.0160|
|  - public_relations                   |      0|none  |     0|acc   |0.7636|±  |0.0407|
|  - security_studies                   |      0|none  |     0|acc   |0.8204|±  |0.0246|
|  - sociology                          |      0|none  |     0|acc   |0.8856|±  |0.0225|
|  - us_foreign_policy                  |      0|none  |     0|acc   |0.9100|±  |0.0288|
| - stem                                |N/A    |none  |     0|acc   |0.6505|±  |0.1266|
|  - abstract_algebra                   |      0|none  |     0|acc   |0.4100|±  |0.0494|
|  - anatomy                            |      0|none  |     0|acc   |0.6444|±  |0.0414|
|  - astronomy                          |      0|none  |     0|acc   |0.8224|±  |0.0311|
|  - college_biology                    |      0|none  |     0|acc   |0.8681|±  |0.0283|
|  - college_chemistry                  |      0|none  |     0|acc   |0.5500|±  |0.0500|
|  - college_computer_science           |      0|none  |     0|acc   |0.6200|±  |0.0488|
|  - college_mathematics                |      0|none  |     0|acc   |0.4200|±  |0.0496|
|  - college_physics                    |      0|none  |     0|acc   |0.5392|±  |0.0496|
|  - computer_security                  |      0|none  |     0|acc   |0.8300|±  |0.0378|
|  - conceptual_physics                 |      0|none  |     0|acc   |0.7362|±  |0.0288|
|  - electrical_engineering             |      0|none  |     0|acc   |0.7034|±  |0.0381|
|  - elementary_mathematics             |      0|none  |     0|acc   |0.5503|±  |0.0256|
|  - high_school_biology                |      0|none  |     0|acc   |0.8742|±  |0.0189|
|  - high_school_chemistry              |      0|none  |     0|acc   |0.6256|±  |0.0341|
|  - high_school_computer_science       |      0|none  |     0|acc   |0.8400|±  |0.0368|
|  - high_school_mathematics            |      0|none  |     0|acc   |0.4370|±  |0.0302|
|  - high_school_physics                |      0|none  |     0|acc   |0.5033|±  |0.0408|
|  - high_school_statistics             |      0|none  |     0|acc   |0.6944|±  |0.0314|
|  - machine_learning                   |      0|none  |     0|acc   |0.5982|±  |0.0465|

no i do not know why the stderr is high. plausibly it is due to the vllm backend used. this is my lm-eval command in most cases (works on h100):

```
lm_eval --model vllm --model_args pretrained=./miqu-1-70b-sf,tensor_parallel_size=4,dtype=auto,gpu_memory_utilization=0.88,data_parallel_size=2 --tasks mmlu --batch_size 20
```
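
to rule the vllm backend in or out, the same eval should also run on lm-eval's plain huggingface backend (slower; `parallelize=True` shards the model across visible GPUs). a sketch i have not verified on this checkpoint:

```
lm_eval --model hf --model_args pretrained=./miqu-1-70b-sf,dtype=auto,parallelize=True --tasks mmlu --batch_size auto
```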

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 76.59 |
| AI2 Reasoning Challenge (25-Shot) | 73.04 |
| HellaSwag (10-Shot)               | 88.61 |
| MMLU (5-Shot)                     | 75.49 |
| TruthfulQA (0-shot)               | 69.38 |
| Winogrande (5-shot)               | 85.32 |
| GSM8k (5-shot)                    | 67.70 |

## LICENSE

NOMERGE License

Copyright (c) 2024 152334H

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, NOT merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

All tensors ("weights") provided by the Software shall not be conjoined with
other tensors ("merging") unless given explicit permission by the license holder.
Utilities, including but not limited to "mergekit" and "MergeMonster", are
forbidden from use in conjunction with this Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 28.82 |
| IFEval (0-Shot)     | 51.82 |
| BBH (3-Shot)        | 43.81 |
| MATH Lvl 5 (4-Shot) | 10.80 |
| GPQA (0-shot)       | 13.42 |
| MuSR (0-shot)       | 17.21 |
| MMLU-PRO (5-shot)   | 35.87 |