---
base_model:
  - w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
  - jeiku/Theory_of_Mind_Mistral
  - jeiku/Re-Host_Limarp_Mistral
  - jeiku/Luna_LoRA_SOLAR
  - jeiku/Theory_of_Mind_Roleplay_Mistral
library_name: transformers
tags:
  - mergekit
  - merge
---

# SolarTest

This is a merge of pre-trained language models created using mergekit.
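Since the card lists `library_name: transformers`, the merged checkpoint can be loaded like any causal LM. A minimal sketch, assuming the weights are published under `jeiku/Luna_Test_10.7B` (inferred from this card's location) and that the upstream SOLAR instruct prompt format applies:

```python
# Minimal loading sketch; the repo id and prompt template are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jeiku/Luna_Test_10.7B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",  # requires accelerate
)

# Prompt format assumed to follow upstream SOLAR-10.7B-Instruct conventions.
prompt = "### User:\nTell me a short story about the moon.\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```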

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored as the base.
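In rough terms, DARE randomly drops a fraction of each fine-tune's parameter delta against the base and rescales the survivors, and TIES then resolves sign conflicts among the remaining deltas before adding the weighted result back onto the base. The toy sketch below illustrates that idea on flat vectors; it is not mergekit's implementation, and the `density` value and the exact handling of `normalize: true` are assumptions.

```python
# Toy illustration of DARE-TIES on 1-D numpy vectors (not mergekit's code).
import numpy as np

rng = np.random.default_rng(0)

def dare_ties(base, finetuned, weights, density=0.5):
    """Merge fine-tuned vectors into `base` via DARE drop/rescale + TIES sign consensus."""
    deltas = []
    for ft, w in zip(finetuned, weights):
        delta = ft - base                              # task vector vs. the base
        keep = rng.random(delta.shape) < density       # DARE: randomly drop elements...
        delta = np.where(keep, delta, 0.0) / density   # ...and rescale the survivors
        deltas.append(w * delta)
    deltas = np.stack(deltas)
    # TIES: elect a sign per parameter and discard deltas that disagree with it.
    elected = np.sign(deltas.sum(axis=0))
    agree = np.sign(deltas) == elected
    kept = np.where(agree, deltas, 0.0)
    # One reading of normalize: true -- divide by the weights that actually contributed.
    weight_sum = np.where(agree, np.array(weights)[:, None], 0.0).sum(axis=0)
    merged_delta = kept.sum(axis=0) / np.clip(weight_sum, 1e-8, None)
    return base + merged_delta

base = rng.normal(size=8)
models = [base + rng.normal(scale=0.1, size=8) for _ in range(4)]
print(dare_ties(base, models, weights=[0.65, 1.0, 0.8, 0.55]))
```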

### Models Merged

The following models were included in the merge:

* w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored + jeiku/Luna_LoRA_SOLAR
* w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored + jeiku/Theory_of_Mind_Mistral
* w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored + jeiku/Theory_of_Mind_Roleplay_Mistral
* w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored + jeiku/Re-Host_Limarp_Mistral

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_ties
base_model: w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
parameters:
  normalize: true
models:
  - model: w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored+jeiku/Luna_LoRA_SOLAR
    parameters:
      weight: 0.65
  - model: w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored+jeiku/Theory_of_Mind_Mistral
    parameters:
      weight: 1
  - model: w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored+jeiku/Theory_of_Mind_Roleplay_Mistral
    parameters:
      weight: 0.8
  - model: w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored+jeiku/Re-Host_Limarp_Mistral
    parameters:
      weight: 0.55
dtype: float16
```
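To reproduce the merge, the configuration above can be saved as `config.yml` and passed to mergekit's `mergekit-yaml` CLI or its Python entry point. A rough sketch using the Python API as shown in the mergekit README at the time of writing; the output path and option values are assumptions, and the API may differ between mergekit versions.

```python
# Sketch of running the merge above with mergekit's Python API.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:  # the YAML shown above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Luna_Test_10.7B",            # assumed output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lora_merge_cache="/tmp/lora_cache",  # LoRAs in the config are merged into the base first
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```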