# YamWizard28-7B
idk

## Quants

Thanks to mradermacher for the quants!
## Merge Details

This is a merge of pre-trained language models created using mergekit.
### Merge Method
This model was merged using the SLERP merge method, with fearlessdots/WizardLM-2-7B-abliterated as the base model.
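SLERP (spherical linear interpolation) blends the two weight tensors along the arc between them rather than the straight line used by plain weighted averaging, which better preserves the scale of the interpolated weights. For an interpolation factor $t$ and angle $\theta$ between the normalized, flattened tensors $w_1$ and $w_2$, the standard formula is:

$$
\operatorname{slerp}(w_1, w_2; t) = \frac{\sin\big((1-t)\,\theta\big)}{\sin\theta}\, w_1 + \frac{\sin(t\,\theta)}{\sin\theta}\, w_2
$$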
### Models Merged
The following models were included in the merge:

- fearlessdots/WizardLM-2-7B-abliterated
- automerger/YamshadowExperiment28-7B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: fearlessdots/WizardLM-2-7B-abliterated
        layer_range: [0, 32]
      - model: automerger/YamshadowExperiment28-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: fearlessdots/WizardLM-2-7B-abliterated
parameters:
  t:
    - filter: self_attn
      value: [0.1, 0.6, 0.3, 0.8, 0.5]
    - filter: mlp
      value: [0.9, 0.4, 0.7, 0.2, 0.5]
    - value: 0.5
dtype: bfloat16
```
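Each five-element `t` list is a gradient that mergekit interpolates across the 32 layers. If I read mergekit's convention correctly, t = 0 keeps the base model's weights and t = 1 takes YamshadowExperiment28's, so self-attention tensors shift between mostly-WizardLM (0.1) and mostly-Yamshadow (0.8) blends depending on depth, MLP tensors roughly follow the opposite pattern, and the final `value: 0.5` applies an even blend to everything else. Below is a minimal per-tensor SLERP sketch in plain PyTorch; it illustrates the idea rather than reproducing mergekit's actual code, and the `slerp` helper and its near-parallel fallback are illustrative assumptions.

```python
import torch

def slerp(t: float, w_base: torch.Tensor, w_other: torch.Tensor,
          dot_threshold: float = 0.9995) -> torch.Tensor:
    """Blend two weight tensors on the unit sphere.
    t = 0 returns w_base unchanged, t = 1 returns w_other."""
    v0 = w_base.flatten().float()
    v1 = w_other.flatten().float()
    # Angle between the two tensors, measured after normalizing.
    dot = torch.clamp((v0 / v0.norm()) @ (v1 / v1.norm()), -1.0, 1.0)
    if dot.abs() > dot_threshold:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        merged = torch.lerp(v0, v1, t)
    else:
        theta = torch.acos(dot)
        sin_theta = torch.sin(theta)
        merged = (torch.sin((1 - t) * theta) / sin_theta) * v0 \
               + (torch.sin(t * theta) / sin_theta) * v1
    return merged.reshape(w_base.shape).to(w_base.dtype)

# Example: an even blend, matching the default t = 0.5 in the config above.
a = torch.randn(8, 8)
b = torch.randn(8, 8)
merged = slerp(0.5, a, b)
```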
## Prompt Format (Alpaca)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{system}

### Instruction:
{prompt}

### Response:
{output}
```
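A minimal sketch of this template in use with transformers; the system and user strings are placeholders, and the generation settings are illustrative rather than tuned recommendations:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v000000/YamWizard28-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Fill in the Alpaca-style template shown above.
system = "You are a helpful assistant."  # placeholder system message
prompt = "Summarize what a SLERP model merge does."  # placeholder user prompt
text = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{system}\n\n"
    f"### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```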