SynthIQ
This is SynthIQ, rated 92.23/100 by GPT-4 across a variety of complex prompts. I used mergekit to merge the models.
| Benchmark | Score |
|---|---|
| ARC | 65.87 |
| HellaSwag | 85.82 |
| MMLU | 64.75 |
| TruthfulQA | 57.00 |
| Winogrande | 78.69 |
| GSM8K | 64.06 |
| AGIEval | 42.67 |
| GPT4All | 73.71 |
| BigBench | 44.59 |
Update - 19/01/2024
Tested to work well with AutoGen and CrewAI.
GGUF Files
- Q4_K_M - medium, balanced quality - recommended
- Q6_K - very large, extremely low quality loss
- Q8_0 - very large, extremely low quality loss - not recommended
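If you want to run one of the GGUF quantizations locally, here is a minimal sketch using llama-cpp-python; the GGUF filename below is hypothetical and depends on which quant you download.

```python
# Minimal sketch, assuming llama-cpp-python is installed and a Q4_K_M GGUF
# file has been downloaded locally (the filename below is hypothetical).
from llama_cpp import Llama

llm = Llama(
    model_path="./synthiq-7b.Q4_K_M.gguf",  # path to the downloaded quant
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

# SynthIQ uses the ChatML prompt format (see the template below).
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain slerp merging in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```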
Important Update: SynthIQ is now available on Ollama. You can use it by running `ollama run stuehieyr/synthiq` in your terminal. If you have limited computing resources, check out this video to learn how to run it on a Google Colab backend.
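Beyond the terminal, the Ollama model can also be called from Python through the `ollama` client package; a minimal sketch, assuming the Ollama server is running and the model has already been pulled:

```python
# Minimal sketch, assuming the Ollama server is running locally and
# `ollama run stuehieyr/synthiq` (or a pull) has already fetched the model.
import ollama

response = ollama.chat(
    model="stuehieyr/synthiq",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a model merge is."},
    ],
)
print(response["message"]["content"])
```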
YAML Config
```yaml
slices:
  - sources:
      - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
        layer_range: [0, 32]
      - model: uukuguy/speechless-mistral-six-in-one-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: bfloat16
```
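To reproduce the merge, the config above can be saved to a file and passed to mergekit's `mergekit-yaml` entry point. A minimal sketch (the config filename and output directory are hypothetical):

```python
# Minimal sketch, assuming mergekit is installed and the YAML above has been
# saved as synthiq.yml. The output directory name is hypothetical.
import subprocess

subprocess.run(
    ["mergekit-yaml", "synthiq.yml", "./SynthIQ-7b"],
    check=True,
)
```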
Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
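For inference with the full-precision weights via transformers, here is a minimal sketch that builds the ChatML prompt by hand; the generation settings are illustrative only.

```python
# Minimal sketch, assuming transformers and torch are installed and enough
# VRAM/RAM is available for the bfloat16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sethuiyer/SynthIQ-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a ChatML prompt following the template above.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about model merging.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```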
The license is the Llama 2 license, as uukuguy/speechless-mistral-six-in-one-7b is released under the Llama 2 license.
Nous Benchmark Evaluation Results
Detailed results can be found here
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 69.37 |
| AI2 Reasoning Challenge (25-Shot) | 65.87 |
| HellaSwag (10-Shot) | 85.82 |
| MMLU (5-Shot) | 64.75 |
| TruthfulQA (0-shot) | 57.00 |
| Winogrande (5-shot) | 78.69 |
| GSM8k (5-shot) | 64.06 |
Evaluation results (Open LLM Leaderboard)

| Task | Split | Metric | Value |
|---|---|---|---|
| AI2 Reasoning Challenge (25-Shot) | test | normalized accuracy | 65.87 |
| HellaSwag (10-Shot) | validation | normalized accuracy | 85.82 |
| MMLU (5-Shot) | test | accuracy | 64.75 |
| TruthfulQA (0-shot) | validation | mc2 | 57.00 |
| Winogrande (5-shot) | validation | accuracy | 78.69 |
| GSM8k (5-shot) | test | accuracy | 64.06 |