
## Description

This repository hosts FP16 files for Loyal-Toppy-Bruins-Maid-7B, a 7B model aimed at engaging RP with solid character card adherence while still being a smart cookie.

Its foundation is Starling-LM-7B-alpha, notable for its performance in the LMSYS Chatbot Arena, even surpassing GPT-3.5-Turbo-1106. The model incorporates rwitz/go-bruins-v2, a Q-bert/MetaMath-Cybertron-Starling derivative with Alpaca RP data tuning.

The other foundational model is chargoddard/loyal-piano-m7, chosen for its strong RP performance and Alpaca format training, with a diverse dataset including PIPPA, rpbuild, and LimaRP.

Undi95/Toppy-M-7B, known for its creativity, brings in useful RP data from various sources. It ranks first among 7B models on OpenRouter for a good reason.

NeverSleep/Noromaid-7b-v0.1.1, a well-regarded Mistral RP finetune, was also added for its unique RP data not present in any of the other models.

The models were merged using the DARE TIES method, targeting a total absolute weight of 1.2 with high density (0.5-0.6), as discussed in the MergeKit GitHub repo.
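For intuition, here is a rough per-tensor sketch of what DARE TIES does (an illustrative toy in plain PyTorch, not mergekit's actual implementation; the function names are mine). DARE drops entries of each task vector, i.e. the delta from the base model, with probability 1 - density and rescales the survivors by 1/density; TIES then elects a majority sign per parameter and only keeps contributions that agree with it:

```python
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    # Drop each entry with probability (1 - density) and rescale the
    # survivors by 1/density so the delta's expected value is unchanged.
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

def dare_ties(base, finetuned, weights, densities):
    # Weighted, sparsified task vectors for each donor model.
    deltas = torch.stack([
        w * dare(ft - base, d)
        for ft, w, d in zip(finetuned, weights, densities)
    ])
    # TIES sign election: per parameter, keep only contributions whose
    # sign matches the sign of the summed deltas.
    elected = torch.sign(deltas.sum(dim=0))
    merged_delta = (deltas * (torch.sign(deltas) == elected)).sum(dim=0)
    return base + merged_delta  # normalize: false -> plain weighted sum
```

With normalize: false, the weights (0.5 + 0.5 + 0.1 + 0.1 = 1.2) are applied as-is rather than being scaled back to 1.0, which is where the 1.2 total absolute weight comes from.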

Currently, this model sits at the top of my personal RP unit test benchmark and scored a very solid 20 on lilblam's LLM Logic Test. My first impressions of it for RPing are very good but, admittedly, this model came out of the oven today, so I haven't played with it too much 😊

## The sauce

```yaml
models: # Top-Loyal-Bruins-Maid-DARE-7B_v2
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: rwitz/go-bruins-v2 # MetamathCybertronStarling base
    parameters:
      weight: 0.5
      density: 0.6
  - model: chargoddard/loyal-piano-m7 # Pull in some PIPPA/LimaRP/Orca/rpguild
    parameters:
      weight: 0.5
      density: 0.6
  - model: Undi95/Toppy-M-7B
    parameters:
      weight: 0.1
      density: 0.5
  - model: NeverSleep/Noromaid-7b-v0.1.1
    parameters:
      weight: 0.1
      density: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
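To reproduce the merge, save the config above as e.g. config.yml and run it through mergekit's `mergekit-yaml` CLI entry point. The Python route looked roughly like this at the time of writing (treat it as a sketch and check the mergekit repo for the current API; paths are placeholders):

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML config shown above (path is a placeholder).
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Write the merged model to ./merged; use the GPU if one is available.
run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
)
```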

## Prompt template: Custom format, or Alpaca

### Custom format:

I found the best SillyTavern results from using the Noromaid template.

SillyTavern config files: Context, Instruct.

Otherwise, I tried to ensure that all of the underlying merged models were Alpaca-favored.

### Alpaca:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
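For a quick smoke test outside SillyTavern, a plain transformers call with the Alpaca template looks like this (a minimal sketch; the instruction text and sampling settings are placeholders, not tuned recommendations):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build the Alpaca-format prompt shown above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGreet the party as a gruff but kindly tavern keeper.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```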