## Description
This repo contains quantized files of Mistral-RP-0.1-7B.
Here is the recipe:

```yaml
slices:
  - sources:
      - model: migtissera/Synthia-7B-v1.3
        layer_range: [0, 32]
      - model: Undi95/Mistral-small_pippa_limaRP-v3-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: migtissera/Synthia-7B-v1.3
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.5 # fallback for the rest of the tensors
dtype: float16
```
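As a rough illustration (not part of the original card), the per-filter `t` values in the recipe control how far spherical linear interpolation moves from the base model's weights toward the second model's: `t=0` keeps the base tensor, `t=1` takes the other model's. A minimal slerp sketch in Python, assuming flattened weight tensors:

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t=0 returns v0 (the base model's tensor), t=1 returns v1.
    Falls back to plain linear interpolation when the vectors are
    nearly parallel, where the spherical formula is unstable.
    """
    v0_n = v0 / (np.linalg.norm(v0) + eps)
    v1_n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0_n, v1_n), -1.0, 1.0)
    if abs(dot) > 0.9995:          # near-parallel: plain lerp
        return (1 - t) * v0 + t * v1
    theta = np.arccos(dot)         # angle between the two tensors
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# A two-element list such as self_attn's [0.75, 0.25] is interpolated
# across the layer stack: early attention layers lean toward
# Undi95/Mistral-small_pippa_limaRP-v3-7B, later ones toward the
# Synthia-7B-v1.3 base.
```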
Tool used: https://github.com/cg123/mergekit/tree/yaml
## Models and lora used

- migtissera/Synthia-7B-v1.3
- Undi95/Mistral-small_pippa_limaRP-v3-7B
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
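For programmatic use, the template above can be filled in with a small helper (a sketch; `build_prompt` is a hypothetical name, not part of any library):

```python
# Alpaca-style template as shown in the card, with a {prompt} slot.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    # Insert the user instruction into the Alpaca template.
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Describe the weather in one sentence."))
```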
## LimaRP v3 usage and suggested settings

You can follow these instruction format settings in SillyTavern. Replace `tiny` with your desired response length:
If you want to support me, you can here.