# Gonzo-Chat-7B
Gonzo-Chat-7B is a merged LLM based on Mistral v0.1 with an 8192-token context length that likes to chat, roleplay, work with agents, do some light programming, and then beat the brakes off you in the back alley...
The BEST Open Source 7B Street Fighting LLM of 2024!!!
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 66.63 |
| AI2 Reasoning Challenge (25-shot) | 65.02 |
| HellaSwag (10-shot) | 85.40 |
| MMLU (5-shot) | 63.75 |
| TruthfulQA (0-shot) | 60.23 |
| Winogrande (5-shot) | 77.74 |
| GSM8k (5-shot) | 47.61 |
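The Avg. figure is simply the unweighted mean of the six benchmark scores, which can be verified directly:

```python
# Reported per-benchmark scores from the table above.
scores = {
    "ARC (25-shot)": 65.02,
    "HellaSwag (10-shot)": 85.40,
    "MMLU (5-shot)": 63.75,
    "TruthfulQA (0-shot)": 60.23,
    "Winogrande (5-shot)": 77.74,
    "GSM8k (5-shot)": 47.61,
}

# Unweighted mean across the six benchmarks, ≈ 66.63 as reported.
avg = sum(scores.values()) / len(scores)
```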
## LLM-Colosseum Results
All contestants fought using the same LLM-Colosseum default settings. Each contestant fought 25 rounds against every other contestant.
https://github.com/OpenGenerativeAI/llm-colosseum
Gonzo-Chat-7B vs. Mistral v0.2, Dolphin-Mistral v0.2, and Deepseek-Coder-6.7b-instruct
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the DARE TIES merge method using eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO as a base.
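As a rough illustration of the DARE TIES idea (not mergekit's actual implementation), the method can be sketched on toy tensors: each fine-tuned model's delta from the base is randomly pruned to a target density and rescaled by 1/density (DARE), then a sign election keeps only the weighted delta entries that agree with the dominant sign before adding them back to the base (TIES). All function names below are illustrative.

```python
import numpy as np

def dare_prune(delta, density, rng):
    # DARE: keep each delta parameter with probability `density`,
    # rescaling survivors by 1/density to preserve expected magnitude.
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def dare_ties_merge(base, finetuned, densities, weights, seed=0):
    rng = np.random.default_rng(seed)
    # Task vectors: difference of each fine-tuned model from the base,
    # sparsified per-model with its configured density.
    deltas = [dare_prune(ft - base, d, rng)
              for ft, d in zip(finetuned, densities)]
    weighted = [w * d for w, d in zip(weights, deltas)]
    # TIES-style sign election: determine the dominant sign per parameter,
    # then drop contributions whose sign disagrees with it.
    total = sum(weighted)
    sign = np.sign(total)
    agreed = [np.where(np.sign(d) == sign, d, 0.0) for d in weighted]
    return base + sum(agreed)
```

With `density: 0.53` as in the config below, roughly half of each model's delta parameters survive pruning, which is what lets several fine-tunes be combined with limited interference.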
### Models Merged
The following models were included in the merge:
- Nondzu/Mistral-7B-Instruct-v0.2-code-ft
- NousResearch/Nous-Hermes-2-Mistral-7B-DPO
- cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
    # No parameters necessary for base model
  - model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
    parameters:
      density: 0.53
      weight: 0.4
  - model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
    parameters:
      density: 0.53
      weight: 0.3
  - model: Nondzu/Mistral-7B-Instruct-v0.2-code-ft
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
parameters:
  int8_mask: true
dtype: bfloat16
```