---
base_model:
- Sao10K/Fimbulvetr-11B-v2
- Undi95/Mistral-11B-CC-Air-RP
library_name: transformers
tags:
- mergekit
- merge
---
# Fimbul-Airo-18B
This is a merge of pre-trained language models created using mergekit.

I tested it for all of thirteen seconds. Works pretty well and seems uncensored. I'll update with more results and observations as I continue testing.
## Merge Details

### Merge Method
This model was merged using the passthrough merge method: taking models and smashing 'em all together. Concretely, layers 0-40 of Fimbulvetr-11B-v2 are stacked on top of layers 8-48 of Mistral-11B-CC-Air-RP, giving 80 layers and roughly 18B parameters in total.
### Models Merged
The following models were included in the merge:
- Sao10K/Fimbulvetr-11B-v2
- Undi95/Mistral-11B-CC-Air-RP, which itself combines:
  - CollectiveCognition-v1.1-Mistral-7B
  - airoboros-mistral2.2-7b
  - a PIPPA dataset 11B qlora
  - a LimaRPv3 dataset 11B qlora
### The Sauce
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: Sao10K/Fimbulvetr-11B-v2
        layer_range: [0, 40]
  - sources:
      - model: Undi95/Mistral-11B-CC-Air-RP
        layer_range: [8, 48]
merge_method: passthrough
dtype: bfloat16
```
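If you want to reproduce the merge, the config above can be fed to mergekit directly. Below is a minimal sketch using mergekit's Python API; the output path is a placeholder, and the option values are assumptions about sensible defaults rather than what was actually used here.

```python
# Minimal sketch: reproduce the merge from the YAML config above.
# Assumes the config is saved as config.yml; the output path is a placeholder.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Fimbul-Airo-18B",  # where the merged weights land
    options=MergeOptions(
        cuda=False,           # set True if you have the GPU headroom
        copy_tokenizer=True,  # carry a tokenizer into the output directory
        lazy_unpickle=True,   # lower peak RAM while loading shards
    ),
)
```

The `mergekit-yaml config.yml ./Fimbul-Airo-18B` CLI does the same thing in one command.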
## Prompt Format: Alpaca
```
### Instruction:
<Prompt>

### Input:
<Insert Context Here>

### Response:
```
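For a quick smoke test, here's a minimal sketch of loading the merge with transformers and sending it one prompt in the Alpaca format above. The repo ID is a placeholder, and the generation settings are just reasonable defaults, not anything this card prescribes.

```python
# Minimal sketch: load the merged model and run one Alpaca-formatted prompt.
# "your-username/Fimbul-Airo-18B" is a placeholder repo ID; use the real path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/Fimbul-Airo-18B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype the merge was produced in
    device_map="auto",
)

# Build the prompt exactly as laid out in the template above.
prompt = (
    "### Instruction:\n"
    "Summarize the passage below in one sentence.\n\n"
    "### Input:\n"
    "Mergekit produces new models by stitching together layers of existing ones.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```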
Don't forget to take care of yourself and have a wonderful day!