# lower
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the DARE TIES merge method, with jeiku/ToxicNoRobotsRosaHermesBoros_3B as the base model.
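For intuition, DARE TIES operates on the deltas between each fine-tune and the base: DARE randomly drops a fraction of each delta's entries and rescales the survivors, then TIES elects a per-parameter sign and discards contributions that disagree with it before summing. The following is a minimal single-tensor sketch of that idea, not mergekit's actual implementation; the function name and arguments are hypothetical.

```python
import torch

def dare_ties_merge(base, task_vectors, weights, density):
    """Simplified DARE + TIES for one weight tensor (illustrative only)."""
    # DARE: drop each delta entry with probability (1 - density) and rescale
    # survivors by 1/density so the expected update magnitude is preserved.
    pruned = []
    for tv in task_vectors:
        if density < 1.0:
            mask = (torch.rand_like(tv) < density).to(tv.dtype)
            tv = tv * mask / density
        pruned.append(tv)

    # TIES: elect a per-parameter sign from the weighted sum of deltas, then
    # keep only the contributions whose sign agrees with the elected one.
    stacked = torch.stack([w * tv for w, tv in zip(weights, pruned)])
    elected = stacked.sum(dim=0).sign()
    agree = (stacked.sign() == elected).to(stacked.dtype)
    return base + (stacked * agree).sum(dim=0)
```

Note that with `density: 1` throughout the configuration below, the DARE dropout step is a no-op, so the merge reduces to weighted TIES sign election over the five fine-tunes.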
### Models Merged
The following models were included in the merge:
- jeiku/ToxicNoRobotsRosaHermesBoros_3B + jeiku/Theory_of_Mind_StableLM
- jeiku/ToxicNoRobotsRosaHermesBoros_3B + jeiku/Everything_v3_StableLM
- jeiku/ToxicNoRobotsRosaHermesBoros_3B + jeiku/Bluemoon_cleaned_StableLM
- jeiku/ToxicNoRobotsRosaHermesBoros_3B + jeiku/Capybara_StableLM
- jeiku/ToxicNoRobotsRosaHermesBoros_3B + jeiku/alpaca-cleaned_StableLM
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: jeiku/ToxicNoRobotsRosaHermesBoros_3B+jeiku/alpaca-cleaned_StableLM
    parameters:
      weight: 0.1
      density: 1
  - model: jeiku/ToxicNoRobotsRosaHermesBoros_3B+jeiku/Capybara_StableLM
    parameters:
      weight: 0.1
      density: 1
  - model: jeiku/ToxicNoRobotsRosaHermesBoros_3B+jeiku/Everything_v3_StableLM
    parameters:
      weight: 0.1
      density: 1
  - model: jeiku/ToxicNoRobotsRosaHermesBoros_3B+jeiku/Theory_of_Mind_StableLM
    parameters:
      weight: 0.15
      density: 1
  - model: jeiku/ToxicNoRobotsRosaHermesBoros_3B+jeiku/Bluemoon_cleaned_StableLM
    parameters:
      weight: 0.1
      density: 1
merge_method: dare_ties
base_model: jeiku/ToxicNoRobotsRosaHermesBoros_3B
parameters:
dtype: bfloat16
```
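To reproduce the merge, this config can be fed back to mergekit. Below is a hedged sketch using mergekit's Python entry point as documented in its README; exact options may differ across versions, and the file paths are illustrative. The `model+lora` entries tell mergekit to apply each LoRA adapter to the base model before merging.

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (saved as config.yml).
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the DARE TIES merge; out_path is an illustrative destination.
run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if available
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=True,              # experimental low-memory loader
    ),
)
```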