---
base_model:
  - Sao10K/Fimbulvetr-10.7B-v1
  - saishf/Kuro-Lotus-10.7B
library_name: transformers
tags:
  - mergekit
  - merge
license: cc-by-nc-4.0
---

This model is a merge of my two personal favourite models. I couldn't decide between them, so why not have both? No MoE, because I'm GPU poor :3

In my own tests it gives Kuro-Lotus-like results without needing a highly detailed character card, and it stays coherent when RoPE-scaled up to 8K context.
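
For reference, here is a minimal sketch of loading the model with transformers and linear RoPE scaling to reach roughly 8K context. The repo-id placeholder, the factor of 2.0 (assuming the usual 4K native context of SOLAR-based models), and the dtype are my assumptions, not settings shipped with this merge:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/or/repo-id-of-this-merge"  # hypothetical: replace with this model's Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype below
    rope_scaling={"type": "linear", "factor": 2.0},  # assumed: 4K native -> ~8K
    device_map="auto",
)
```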

I personally use the "Universal Light" preset in SillyTavern. With the "Alpaca" instruct format the results can be short, but they are longer with "Alpaca Roleplay".

"Universal Light" preset can be extremely creative but sometimes likes to act for user with some cards, for those i like just the "default" but any preset seems to work!

Benchmarks (average 72.73):

| Benchmark  | Score |
|------------|-------|
| ARC        | 69.54 |
| HellaSwag  | 87.87 |
| MMLU       | 66.99 |
| TruthfulQA | 60.95 |
| Winogrande | 84.14 |
| GSM8K      | 66.87 |

# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the SLERP merge method, with saishf/Kuro-Lotus-10.7B as the base.
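
For intuition, here is a minimal sketch of SLERP between two weight tensors: instead of averaging along the straight line between them, it interpolates along the arc of the hypersphere connecting them. This is an illustrative re-implementation in plain PyTorch, not mergekit's actual code; the epsilon threshold and flattening details are my own choices:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    v0_flat, v1_flat = v0.flatten().float(), v1.flatten().float()
    # Angle between the two weight vectors, computed on normalized copies.
    v0_n = v0_flat / (v0_flat.norm() + eps)
    v1_n = v1_flat / (v1_flat.norm() + eps)
    dot = torch.clamp(torch.dot(v0_n, v1_n), -1.0, 1.0)
    omega = torch.acos(dot)
    # Nearly parallel vectors: fall back to plain linear interpolation.
    if omega.abs() < eps:
        return (1.0 - t) * v0 + t * v1
    sin_omega = torch.sin(omega)
    s0 = torch.sin((1.0 - t) * omega) / sin_omega
    s1 = torch.sin(t * omega) / sin_omega
    return (s0 * v0_flat + s1 * v1_flat).reshape(v0.shape).to(v0.dtype)
```

At `t = 0` this returns the first tensor unchanged and at `t = 1` the second; the configuration below varies `t` per layer and per module type.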

### Models Merged

The following models were included in the merge:

* [Sao10K/Fimbulvetr-10.7B-v1](https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1)
* [saishf/Kuro-Lotus-10.7B](https://huggingface.co/saishf/Kuro-Lotus-10.7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: saishf/Kuro-Lotus-10.7B
        layer_range: [0, 48]
      - model: Sao10K/Fimbulvetr-10.7B-v1
        layer_range: [0, 48]
merge_method: slerp
base_model: saishf/Kuro-Lotus-10.7B
parameters:
  t:
    - filter: self_attn
      value: [0.6, 0.7, 0.8, 0.9, 1]
    - filter: mlp
      value: [0.4, 0.3, 0.2, 0.1, 0]
    - value: 0.5
dtype: bfloat16
```
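
To reproduce the merge, this config can be saved (e.g. as `config.yml`) and passed to mergekit's CLI: `mergekit-yaml config.yml ./output-dir`. The sketch below shows my reading of how the `t` lists apply, not code from the project itself: each list is treated as evenly spaced anchor points and linearly interpolated across the 48 layers, with `t = 0` keeping the base model (Kuro-Lotus) and `t = 1` taking Fimbulvetr, so deeper attention layers lean toward Fimbulvetr while deeper MLP layers lean toward Kuro-Lotus. Everything else uses the flat `t = 0.5`.

```python
import numpy as np

NUM_LAYERS = 48
anchors = {
    "self_attn": [0.6, 0.7, 0.8, 0.9, 1.0],  # leans toward Fimbulvetr with depth
    "mlp":       [0.4, 0.3, 0.2, 0.1, 0.0],  # leans toward Kuro-Lotus with depth
}

for module, values in anchors.items():
    anchor_pos = np.linspace(0.0, 1.0, num=len(values))  # positions of the anchors
    layer_pos = np.linspace(0.0, 1.0, num=NUM_LAYERS)    # one position per layer
    t_per_layer = np.interp(layer_pos, anchor_pos, values)
    print(module, np.round(t_per_layer, 2))
```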