---
base_model:
- Sao10K/Fimbulvetr-11B-v2
- Undi95/Mistral-11B-CC-Air-RP
library_name: transformers
tags:
- mergekit
- merge
- 👍
---
![cute](https://huggingface.co/matchaaaaalatte/Fimbul-Airo-18B/blob/main/cute.png)

# Fimbul-Airo-18B

👍

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

👍

I tested it for thirteen seconds.

👍

Works pretty well and seems uncensored. I'll update with more results and observations as I continue to test.

## Merge Details

### Merge Method

This model was merged using the passthrough merge method. Taking models and smashing 'em all together. 👍

### Models Merged

The following models were included in the merge:
* [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) 👍
* [Undi95/Mistral-11B-CC-Air-RP](https://huggingface.co/Undi95/Mistral-11B-CC-Air-RP) 👍
* [CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B)
* [airoboros-mistral2.2-7b](https://huggingface.co/teknium/airoboros-mistral2.2-7b/)
* PIPPA dataset 11B QLoRA
* LimaRPv3 dataset 11B QLoRA

### The Sauce

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Sao10K/Fimbulvetr-11B-v2
        layer_range: [0, 40]
  - sources:
      - model: Undi95/Mistral-11B-CC-Air-RP
        layer_range: [8, 48]
merge_method: passthrough
dtype: bfloat16
```

👍

### Prompt Format: Alpaca

👍

```
### Instruction:

### Input:

### Response:
```

👍

Don't forget to take care of yourself and have a wonderful day!
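
### Reproducing the Merge

If you want to recreate this merge yourself, here's a minimal sketch using mergekit's Python API (mergekit also ships a `mergekit-yaml` CLI that takes the same config file). The filename `fimbul-airo.yml` and the output directory are placeholders, and the exact API may shift between mergekit releases, so treat this as a starting point rather than a guaranteed recipe.

```python
# Minimal sketch: run the passthrough merge from "The Sauce" above.
# Assumes a recent mergekit release; "fimbul-airo.yml" is a placeholder
# filename holding that YAML, and "./Fimbul-Airo-18B" is an arbitrary
# output directory. CLI equivalent: mergekit-yaml fimbul-airo.yml ./Fimbul-Airo-18B
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("fimbul-airo.yml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./Fimbul-Airo-18B",
    options=MergeOptions(
        copy_tokenizer=True,  # copy a tokenizer into the output directory
        lazy_unpickle=True,   # lower peak RAM while reading weight shards
    ),
)
```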
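
### Example Usage

And a hedged sketch of loading the model and prompting it in the Alpaca format above. The repo id `matchaaaaalatte/Fimbul-Airo-18B` is inferred from the image link at the top of this card, and the instruction text is just an example; an 18B model in bfloat16 needs roughly 36 GB of memory, so adjust `device_map` or quantization to taste.

```python
# Minimal sketch: load the merged model and prompt it Alpaca-style.
# Assumes the weights live at matchaaaaalatte/Fimbul-Airo-18B (inferred
# from the image URL above); requires transformers + accelerate and
# enough memory for an 18B model in bfloat16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "matchaaaaalatte/Fimbul-Airo-18B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

# Build a prompt matching the Alpaca template above.
prompt = (
    "### Instruction:\nWrite a short scene set in a snowed-in tavern.\n\n"
    "### Input:\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)

# Strip the prompt tokens and print only the completion.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

👍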