# madwind-wizard-7B-GGUF
This is a merge of pre-trained 7B language models created using mergekit.
The goal of this merge was to combine the 32K context window of the Mistral v0.2 base model with the richness and strength of the Zephyr Beta and WizardLM 2 models. This was a mixed-precision merge, promoting the Mistral v0.2 base weights from fp16 to bf16.
The result can be used for text generation. Note that Zephyr Beta was trained on datasets from which built-in alignment had been filtered out, producing a model more likely to generate problematic text when prompted. This merge appears to have inherited that trait.
- Full weights: grimjim/madwind-wizard-7B
- GGUF quants: grimjim/madwind-wizard-7B-GGUF
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
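SLERP interpolates each pair of weight tensors along the great-circle arc between them rather than along a straight line, which preserves the magnitude of the combined weights better than plain linear averaging. As an illustrative sketch only (this is the textbook formula, not mergekit's actual implementation), the `slerp` helper below interpolates two flattened weight vectors at interpolation factor `t`:

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t=0 returns v0, t=1 returns v1; intermediate values follow the
    great-circle arc between the two (non-normalized) vectors.
    """
    v0_u = v0 / np.linalg.norm(v0)
    v1_u = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0_u, v1_u), -1.0, 1.0)
    # Fall back to linear interpolation when the vectors are (near-)parallel,
    # where the SLERP denominator sin(omega) approaches zero.
    if abs(dot) > 1.0 - eps:
        return (1.0 - t) * v0 + t * v1
    omega = np.arccos(dot)        # angle between the two directions
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1
```

With `t: 0.5`, as in the configuration below, each merged tensor sits at the midpoint of the arc between the two source models' tensors.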
### Models Merged
The following models were included in the merge:
- alpindale/Mistral-7B-v0.2-hf
- grimjim/zephyr-beta-wizardLM-2-merge-7B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: alpindale/Mistral-7B-v0.2-hf
        layer_range: [0, 32]
      - model: grimjim/zephyr-beta-wizardLM-2-merge-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: alpindale/Mistral-7B-v0.2-hf
parameters:
  t:
    - value: 0.5
dtype: bfloat16
```
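Assuming mergekit is installed, a configuration like the one above can be applied with its `mergekit-yaml` entry point (the file and output paths below are illustrative, not taken from this repository):

```shell
pip install mergekit
# Run the merge described in the YAML file; the merged model is
# written to the given output directory.
mergekit-yaml merge-config.yml ./madwind-wizard-7B
```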