
Description

This repo contains the fp16 weights of Mistral-RP-0.1-7B.

Some example outputs

Here is the recipe:

```yaml
slices:
  - sources:
      - model: migtissera/Synthia-7B-v1.3
        layer_range: [0, 32]
      - model: Undi95/Mistral-small_pippa_limaRP-v3-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: migtissera/Synthia-7B-v1.3
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```
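For reference, SLERP (spherical linear interpolation) blends each pair of tensors along a great-circle arc rather than a straight line, and the `t` entries above set the interpolation weight per tensor group (`t: 0` keeps the base model's tensor, `t: 1` the other model's). The following NumPy sketch illustrates the idea only; it is not mergekit's actual implementation:

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherically interpolate between two tensors at weight t in [0, 1]."""
    v0f = v0.ravel().astype(np.float64)
    v1f = v1.ravel().astype(np.float64)
    # Angle between the two tensors, measured on the unit sphere.
    n0 = v0f / np.linalg.norm(v0f)
    n1 = v1f / np.linalg.norm(v1f)
    dot = np.clip(np.dot(n0, n1), -1.0, 1.0)
    if abs(dot) > 1.0 - eps:
        # Nearly colinear tensors: fall back to plain linear interpolation.
        return ((1 - t) * v0f + t * v1f).reshape(v0.shape)
    theta = np.arccos(dot)
    s = np.sin(theta)
    w0 = np.sin((1 - t) * theta) / s
    w1 = np.sin(t * theta) / s
    return (w0 * v0f + w1 * v1f).reshape(v0.shape)

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
mid = slerp(0.5, a, b)  # halfway along the arc between a and b
```

Two-element `value` lists like `[0.75, 0.25]` interpolate `t` itself across the layer stack, so early layers lean toward one model and late layers toward the other.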

Tool used: https://github.com/cg123/mergekit/tree/yaml

Models and LoRA used

Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
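If you are building prompts programmatically, the Alpaca template above can be filled in with a small helper. This is just a sketch; the `build_prompt` name is illustrative, not part of any library:

```python
# Alpaca-style prompt template used by this model.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Insert the user's instruction into the Alpaca template."""
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Write a haiku about autumn."))
```

The model's completion should then be generated after the final `### Response:` line.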

LimaRP v3 usage and suggested settings


You can follow the LimaRP v3 instruction format settings in SillyTavern. Replace `tiny` with your desired response length.

If you want to support me, you can here.
