
exl2 quantization of Undi95/Mistral-11B-TestBench3 (4.0 bpw, 8-bit head).

Calibration dataset: wikitext

Conversion command:

```sh
python convert.py -i models/Undi95_Mistral-11B-TestBench3 -o Undi95_Mistral-11B-TestBench3-temp -cf Undi95_Mistral-11B-TestBench3-4.0bpw-h8-exl2 -c 0000.parquet -l 4096 -b 4 -hb 8 -ss 4096
```
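A minimal loading/generation sketch for the resulting quant, assuming the exllamav2 Python package is installed and using the output directory from the command above; class and method names follow exllamav2's example scripts and are an assumption here, not part of this repo:

```python
# Sketch: load the 4.0bpw-h8 exl2 quant and generate a short completion.
# Assumes exllamav2 is installed and the model directory below exists locally.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Undi95_Mistral-11B-TestBench3-4.0bpw-h8-exl2"  # local path to the quant
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("Hello, my name is", settings, num_tokens=64))
```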

The original model card follows below.

```yaml
slices:
  - sources:
    - model: Norquinal/Mistral-7B-claude-chat
      layer_range: [0, 24]
  - sources:
    - model: Open-Orca/Mistral-7B-OpenOrca
      layer_range: [8, 32]
merge_method: passthrough
dtype: float16
```

---

```yaml
slices:
  - sources:
      - model: Undi95/Mistral-11B-CC-Air
        layer_range: [0, 48]
      - model: "/content/drive/MyDrive/Mistral-11B-ClaudeOrca"
        layer_range: [0, 48]
merge_method: slerp
base_model: Undi95/Mistral-11B-CC-Air
parameters:
  t:
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```
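Both configs above are in mergekit's YAML format. As a rough sketch of how such a config is typically applied, assuming mergekit is installed and the config is saved locally (file and output paths are placeholders):

```sh
# run the merge described by config.yml; merged weights land in ./Mistral-11B-merged
mergekit-yaml config.yml ./Mistral-11B-merged
```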

Evaluation (lm-evaluation-harness): `hf-causal-experimental (pretrained=/content/drive/MyDrive/Mistral-11B-Test), limit: None, provide_description: False, num_fewshot: 0, batch_size: 4`

| Task | Version | Metric | Value | Stderr |
|---|---|---|---|---|
| arc_challenge | 0 | acc | 0.5401 | ±0.0146 |
| | | acc_norm | 0.5589 | ±0.0145 |
| arc_easy | 0 | acc | 0.8199 | ±0.0079 |
| | | acc_norm | 0.8127 | ±0.0080 |
| hellaswag | 0 | acc | 0.6361 | ±0.0048 |
| | | acc_norm | 0.8202 | ±0.0038 |
| piqa | 0 | acc | 0.8079 | ±0.0092 |
| | | acc_norm | 0.8199 | ±0.0090 |
| truthfulqa_mc | 1 | mc1 | 0.3733 | ±0.0169 |
| | | mc2 | 0.5374 | ±0.0156 |
| winogrande | 0 | acc | 0.7261 | ±0.0125 |

