---
base_model:
- Sao10K/Fimbulvetr-11B-v2
- Undi95/Mistral-11B-CC-Air-RP
library_name: transformers
tags:
- mergekit
- merge
---
# Fimbul-Airo-18B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
I tested it for thirteen seconds.

Works pretty well, seems uncensored. I'll update with more results/observations as I continue to test.
## Merge Details

### Merge Method
This model was merged using the passthrough merge method, which stacks layer slices from the source models end-to-end instead of averaging their weights, so the result is deeper than either parent (40 layers from each source here, 80 in total). In other words: taking a bunch of models and smashing them all together.
### Models Merged

The following models were included in the merge:

* [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)
* [Undi95/Mistral-11B-CC-Air-RP](https://huggingface.co/Undi95/Mistral-11B-CC-Air-RP)
* [CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B)
* [airoboros-mistral2.2-7b](https://huggingface.co/teknium/airoboros-mistral2.2-7b/)
* PIPPA dataset 11B QLoRA
* LimaRPv3 dataset 11B QLoRA
### The Sauce

The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: Sao10K/Fimbulvetr-11B-v2
        layer_range: [0, 40]
  - sources:
      - model: Undi95/Mistral-11B-CC-Air-RP
        layer_range: [8, 48]
merge_method: passthrough
dtype: bfloat16
```
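If you want to reproduce the merge, save the config above as `config.yml` and run it through mergekit's CLI, e.g. `mergekit-yaml config.yml ./Fimbul-Airo-18B` (the output directory name is up to you; see the mergekit README for options such as `--cuda`).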
### Prompt Format: Alpaca
```
### Instruction:
<Prompt>

### Input:
<Insert Context Here>

### Response:
```
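For completeness, here's a minimal generation sketch with transformers using the Alpaca format above. The repo id is a placeholder (swap in the actual Hugging Face id for this merge), and the prompt text and sampling settings are just illustrative defaults, not tested recommendations:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/Fimbul-Airo-18B"  # placeholder, not the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge itself was done in bfloat16
    device_map="auto",
)

# Build an Alpaca-style prompt matching the format above.
prompt = (
    "### Instruction:\n"
    "Write a short scene introducing a grumpy innkeeper.\n\n"
    "### Input:\n"
    "Setting: a snowed-in roadside inn.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)

# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```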
Don't forget to take care of yourself and have a wonderful day! |