---
base_model:
- NeverSleep/Noromaid-13B-0.4-DPO
- cgato/Thespis-13b-DPO-v0.7
- KoboldAI/LLaMA2-13B-Estopia
- Doctor-Shotgun/cat-v1.0-13b
- BlueNipples/TimeCrystal-l2-13B
- TheBloke/Llama-2-13B-fp16
tags:
- mergekit
- merge
---
# EstopianMaid

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit) by Katy.

## Merge Details

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as the base.

### Models Merged

The following models were included in the merge:

* [NeverSleep/Noromaid-13B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-13B-0.4-DPO)
* [cgato/Thespis-13b-DPO-v0.7](https://huggingface.co/cgato/Thespis-13b-DPO-v0.7)
* [KoboldAI/LLaMA2-13B-Estopia](https://huggingface.co/KoboldAI/LLaMA2-13B-Estopia)
* [Doctor-Shotgun/cat-v1.0-13b](https://huggingface.co/Doctor-Shotgun/cat-v1.0-13b)
* [BlueNipples/TimeCrystal-l2-13B](https://huggingface.co/BlueNipples/TimeCrystal-l2-13B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: TheBloke/Llama-2-13B-fp16
dtype: float16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 40]
    model: TheBloke/Llama-2-13B-fp16
  - layer_range: [0, 40]
    model: BlueNipples/TimeCrystal-l2-13B
    parameters:
      weight: 0.75
  - layer_range: [0, 40]
    model: cgato/Thespis-13b-DPO-v0.7
    parameters:
      weight: 0.23
  - layer_range: [0, 40]
    model: KoboldAI/LLaMA2-13B-Estopia
    parameters:
      weight: 0.15
  - layer_range: [0, 40]
    model: NeverSleep/Noromaid-13B-0.4-DPO
    parameters:
      weight: 0.2
  - layer_range: [0, 40]
    model: Doctor-Shotgun/cat-v1.0-13b
    parameters:
      weight: 0.03
```
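Conceptually, task arithmetic builds the merged weights as `base + Σ wᵢ · (modelᵢ − base)`: each fine-tune contributes its "task vector" (its delta from the base model), scaled by the weight given in the configuration. A minimal sketch of this per-tensor operation, using toy NumPy arrays in place of real checkpoints (the tensor shapes and values are illustrative only; the weights are the ones from the configuration):

```python
import numpy as np

# Toy stand-ins for one weight matrix from each model; in practice these
# would be loaded from the actual checkpoints (e.g. via safetensors).
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
finetuned = {  # model -> (toy tensor, merge weight from the config)
    "TimeCrystal-l2-13B": (base + rng.normal(scale=0.1, size=(4, 4)), 0.75),
    "Thespis-13b-DPO-v0.7": (base + rng.normal(scale=0.1, size=(4, 4)), 0.23),
    "LLaMA2-13B-Estopia": (base + rng.normal(scale=0.1, size=(4, 4)), 0.15),
    "Noromaid-13B-0.4-DPO": (base + rng.normal(scale=0.1, size=(4, 4)), 0.2),
    "cat-v1.0-13b": (base + rng.normal(scale=0.1, size=(4, 4)), 0.03),
}

# Task arithmetic: add each weighted task vector onto the base.
merged = base.copy()
for tensor, weight in finetuned.values():
    merged += weight * (tensor - base)
```

mergekit applies this same arithmetic to every tensor in the listed layer range; the sketch above shows only the core update rule.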