---
license: cc-by-4.0
language:
- en
base_model:
- SanjiWatsuki/Kunoichi-7B
- SanjiWatsuki/Silicon-Maid-7B
- KatyTheCutie/LemonadeRP-4.5.3
- Sao10K/Fimbulvetr-11B-v2
library_name: transformers
tags:
- mergekit
- merge
- mistral
- text-generation
- roleplay
---

![cute](https://huggingface.co/FallenMerick/Chewy-Lemon-Cookie-11B/resolve/main/Chewy-Lemon-Cookie.png)

# Chewy-Lemon-Cookie-11B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

GGUF quants:

* https://huggingface.co/backyardai/Chewy-Lemon-Cookie-11B-GGUF
* https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-GGUF

Weighted/imatrix quants:

* https://huggingface.co/mradermacher/Chewy-Lemon-Cookie-11B-i1-GGUF

## Merge Details

### Merge Method

This model was merged in two stages, using the following methods:

* passthrough
* [task arithmetic](https://arxiv.org/abs/2212.04089)

### Models Merged

The following models were included in the merge:

* [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
* [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
* [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)

### Configuration

The following YAML configurations were used to produce this model. The first (passthrough) stage stacks layer ranges from the three 7B models into an 11B frankenmerge; the second (task arithmetic) stage blends Fimbulvetr-11B-v2 into that intermediate model.

```yaml
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-7B
        layer_range: [0, 24]
  - sources:
      - model: SanjiWatsuki/Silicon-Maid-7B
        layer_range: [8, 24]
  - sources:
      - model: KatyTheCutie/LemonadeRP-4.5.3
        layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
name: Big-Lemon-Cookie-11B-BF16
---
models:
  - model: Big-Lemon-Cookie-11B-BF16
    parameters:
      weight: 0.85
  - model: Sao10K/Fimbulvetr-11B-v2
    parameters:
      weight: 0.15
merge_method: task_arithmetic
base_model: Big-Lemon-Cookie-11B-BF16
dtype: bfloat16
name: Chewy-Lemon-Cookie-11B
```
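
### Reproducing the Merge

For anyone who wants to reproduce the merge, the two YAML documents above can be run as separate mergekit stages. The sketch below uses mergekit's Python API (`MergeConfiguration`, `MergeOptions`, `run_merge`) as documented in mergekit's README at the time of writing; the API may have changed since. The file names are hypothetical, and it assumes each YAML document has been saved to its own file, with the second stage's `model`/`base_model` entries pointed at the stage-1 output directory:

```python
# Sketch of reproducing the two-stage merge with mergekit's Python API.
# File names and output paths are hypothetical.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge


def merge_stage(config_path: str, out_path: str) -> None:
    """Run a single mergekit config file and write the merged model to out_path."""
    with open(config_path, "r", encoding="utf-8") as fp:
        config = MergeConfiguration.model_validate(yaml.safe_load(fp))
    run_merge(
        config,
        out_path=out_path,
        options=MergeOptions(
            cuda=torch.cuda.is_available(),
            copy_tokenizer=True,
        ),
    )


# Stage 1: passthrough layer-stacking (first YAML document above).
merge_stage("big-lemon-cookie.yml", "./Big-Lemon-Cookie-11B-BF16")

# Stage 2: task-arithmetic blend (second YAML document above), with its
# model/base_model entries rewritten to point at the stage-1 output directory.
merge_stage("chewy-lemon-cookie.yml", "./Chewy-Lemon-Cookie-11B")
```

The same configs can also be run with mergekit's `mergekit-yaml` command-line tool, one stage at a time.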
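
## Usage

The merged model loads like any other Mistral-architecture causal LM through the standard transformers API. A minimal generation sketch; the prompt and sampling settings here are placeholders, not recommendations:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FallenMerick/Chewy-Lemon-Cookie-11B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was performed in bfloat16
    device_map="auto",
)

prompt = "Write a short scene set in a cozy bakery."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```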