
SynthIQ

This is SynthIQ, rated 92.23/100 by GPT-4 across varied complex prompts. I used mergekit to merge models.

| Benchmark Name | Score |
|----------------|-------|
| ARC            | 65.87 |
| HellaSwag      | 85.82 |
| MMLU           | 64.75 |
| TruthfulQA     | 57.00 |
| Winogrande     | 78.69 |
| GSM8K          | 64.06 |
| AGIEval        | 42.67 |
| GPT4All        | 73.71 |
| Bigbench       | 44.59 |

Update - 19/01/2024

Tested to work well with AutoGen and CrewAI.
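
If you want to try it with AutoGen, a minimal sketch along these lines should work, assuming the model is served behind an OpenAI-compatible endpoint (for example Ollama, mentioned below, or a LiteLLM proxy). The endpoint URL, model name, and pyautogen version details are assumptions, not part of this card.

```python
# Hypothetical AutoGen sketch: point pyautogen at SynthIQ served behind an
# OpenAI-compatible endpoint. URL, key and model name below are placeholders.
import autogen

config_list = [
    {
        "model": "stuehieyr/synthiq",             # model name as exposed by your server
        "base_url": "http://localhost:11434/v1",  # e.g. Ollama's OpenAI-compatible API
        "api_key": "not-needed",                  # local servers usually ignore the key
    }
]

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list, "temperature": 0.7},
)
user_proxy = autogen.UserProxyAgent(
    name="user",
    human_input_mode="NEVER",     # fully automated for this demo
    code_execution_config=False,  # no local code execution
)

user_proxy.initiate_chat(assistant, message="Summarise what a SLERP model merge does.")
```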

GGUF Files

- Q4_K_M - medium, balanced quality - recommended
- Q6_K - very large, extremely low quality loss
- Q8_0 - very large, extremely low quality loss - not recommended
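
A minimal sketch for running one of the GGUF quants locally with llama-cpp-python; the file path is a placeholder for whichever quant you downloaded, and `chat_format="chatml"` matches the prompt template further down.

```python
# Minimal sketch: run a GGUF quant (e.g. Q4_K_M) locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="synthiq.Q4_K_M.gguf",  # path to the downloaded GGUF file (placeholder)
    n_ctx=4096,                        # context window
    chat_format="chatml",              # matches the ChatML prompt template below
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are SynthIQ, a helpful assistant."},
        {"role": "user", "content": "Give me one use case for a merged 7B model."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```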

Important Update: SynthIQ is now available on Ollama. You can use it by running `ollama run stuehieyr/synthiq` in your terminal. If you have limited computing resources, check out this video to learn how to run it on a Google Colab backend.
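
If you prefer Python over the CLI, the official `ollama` package can call the same model. This is a minimal sketch and assumes an Ollama server is already running locally with the model pulled.

```python
# Optional: call the Ollama-hosted model from Python instead of the CLI
# (requires `pip install ollama` and a running Ollama server).
import ollama

response = ollama.chat(
    model="stuehieyr/synthiq",
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
)
print(response["message"]["content"])
```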

YAML Config


```yaml
slices:
  - sources:
      - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
        layer_range: [0, 32]
      - model: uukuguy/speechless-mistral-six-in-one-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: bfloat16
```
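
To reproduce the merge from this config, something like the following should work with recent mergekit releases (the Python API can change between versions; `mergekit-yaml` is the CLI equivalent). The config and output paths below are placeholders.

```python
# Hypothetical reproduction sketch: load the YAML above and run the merge via
# mergekit's Python API (API surface may differ between mergekit releases).
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("synthiq.yml", "r", encoding="utf-8") as fp:  # the config shown above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./SynthIQ-7b",            # output directory (placeholder)
    options=MergeOptions(
        cuda=torch.cuda.is_available(), # use GPU if available
        copy_tokenizer=True,            # write the merged/union tokenizer to the output
    ),
)
```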

Prompt template: ChatML

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
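
Here is a minimal transformers sketch that formats the prompt exactly as the template above and generates from the safetensors weights; the system message, prompt, and generation settings are just examples, and it assumes enough memory for a 7B model in bfloat16.

```python
# Minimal sketch: generate with the ChatML template above using transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sethuiyer/SynthIQ-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build the prompt exactly as shown in the template.
system_message = "You are SynthIQ, a helpful assistant."
prompt = "Explain SLERP merging in two sentences."
text = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```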

The license is the Llama 2 license, since uukuguy/speechless-mistral-six-in-one-7b is released under the Llama 2 license.


Nous Benchmark Evaluation Results

Detailed results can be found here

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                              | Value |
|-------------------------------------|-------|
| Avg.                                | 69.37 |
| AI2 Reasoning Challenge (25-Shot)   | 65.87 |
| HellaSwag (10-Shot)                 | 85.82 |
| MMLU (5-Shot)                       | 64.75 |
| TruthfulQA (0-shot)                 | 57.00 |
| Winogrande (5-shot)                 | 78.69 |
| GSM8k (5-shot)                      | 64.06 |