---
base_model:
  - mistralai/Mistral-7B-v0.1
library_name: transformers
tags:
  - mergekit
  - merge
  - mistralai/Mistral-7B-v0.1
  - SanjiWatsuki/Kunoichi-DPO-v2-7B
  - maywell/PiVoT-0.1-Evil-a
  - mlabonne/ArchBeagle-7B
  - NeverSleep/Noromaid-7B-0.4-DPO
---

# konstanta-final

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was produced in two stages: two intermediate models were built with the DARE-TIES merge method, and these were then combined with the SLERP merge method.
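For intuition, here is a minimal NumPy sketch of what SLERP does for a single pair of weight tensors: it interpolates along the arc between the two flattened tensors rather than along the straight line between them, falling back to linear interpolation when they are nearly colinear. This is an illustration only, not mergekit's implementation, which also applies the per-filter `t` schedule shown in the configuration below.

```python
# Minimal sketch of spherical linear interpolation (SLERP) between two
# weight tensors. Illustration only; not mergekit's actual implementation.
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Interpolate from v0 (t=0) to v1 (t=1) along the arc between them."""
    a = v0.ravel().astype(np.float64)
    b = v1.ravel().astype(np.float64)
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    cos_omega = float(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if abs(cos_omega) > 1.0 - eps:
        # Nearly colinear tensors: fall back to plain linear interpolation.
        return (1.0 - t) * v0 + t * v1
    omega = np.arccos(cos_omega)      # angle between the two tensors
    sin_omega = np.sin(omega)
    c0 = np.sin((1.0 - t) * omega) / sin_omega
    c1 = np.sin(t * omega) / sin_omega
    return (c0 * a + c1 * b).reshape(v0.shape).astype(v0.dtype)
```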

### Models Merged

The following models were included in the merge:

* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [maywell/PiVoT-0.1-Evil-a](https://huggingface.co/maywell/PiVoT-0.1-Evil-a)
* [mlabonne/ArchBeagle-7B](https://huggingface.co/mlabonne/ArchBeagle-7B)
* [LakoMoor/Silicon-Alice-7B](https://huggingface.co/LakoMoor/Silicon-Alice-7B)

### Configuration

The following YAML configuration was used to produce this model (to reproduce it, use the `mergekit-mega` command):

```yaml
base_model: mistralai/Mistral-7B-v0.1
dtype: float16
merge_method: dare_ties
parameters:
  int8_mask: true
slices:
- sources:
  - layer_range: [0, 32]
    model: mistralai/Mistral-7B-v0.1
  - layer_range: [0, 32]
    model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    parameters:
      density: 0.8
      weight: 0.5
  - layer_range: [0, 32]
    model: maywell/PiVoT-0.1-Evil-a
    parameters:
      density: 0.3
      weight: 0.15
name: first-step
---
base_model: mistralai/Mistral-7B-v0.1
dtype: float16
merge_method: dare_ties
parameters:
  int8_mask: true
slices:
- sources:
  - layer_range: [0, 32]
    model: mistralai/Mistral-7B-v0.1
  - layer_range: [0, 32]
    model: mlabonne/ArchBeagle-7B
    parameters:
      density: 0.8
      weight: 0.75
  - layer_range: [0, 32]
    model: LakoMoor/Silicon-Alice-7B
    parameters:
      density: 0.6
      weight: 0.30
name: second-step
---
models:
  - model: first-step
  - model: second-step
merge_method: slerp
base_model: first-step
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
  int8_mask: true
  normalize: true
dtype: float16
```
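
As a usage sketch, the result can be loaded like any other `transformers` causal LM. The repository id below is an assumption for illustration; substitute the actual Hub path of this model or a local directory, and note that `device_map="auto"` requires `accelerate`.

```python
# Minimal inference sketch with transformers. "Inv/konstanta-final" is a
# placeholder repository id; point it at the real Hub repo or a local copy.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Inv/konstanta-final"  # assumption: replace with the actual path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's dtype (float16 here)
    device_map="auto",    # requires `accelerate`
)

prompt = "Write a short scene between a wandering knight and a fox spirit."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```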