---
library_name: transformers
license: other
datasets:
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED
- ResplendentAI/Synthetic_Soul_1k
language:
- en
tags:
- gguf
- quantized
- roleplay
- imatrix
- mistral
- merge
inference: false
---
This repository hosts GGUF-Imatrix quantizations for Test157t/Eris-Daturamix-7b.
Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
To be uploaded:
```python
quantization_options = [
    "Q4_K_M", "IQ4_XS", "Q5_K_M", "Q6_K",
    "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
]
```
This is experimental.
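Once uploaded, these quants can be used like any other GGUF file. Below is a minimal usage sketch with `huggingface_hub` and `llama-cpp-python`; the repository id, filename, and Alpaca-style prompt are placeholders/assumptions, so check the Files and versions tab for the actual names.

```python
# Minimal sketch: fetch one quant and run a short generation with llama-cpp-python.
# The repo_id and filename below are placeholders, not confirmed names; replace them
# with this repository's id and an actual GGUF filename once the quants are uploaded.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="<this-repo>/Eris-Daturamix-7b-GGUF-Imatrix",  # placeholder
    filename="Eris-Daturamix-7b-Q4_K_M-imat.gguf",         # placeholder
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
# Alpaca-style prompt, assumed from the listed training data.
out = llm(
    "### Instruction:\nStay in character and greet the user.\n\n### Response:\n",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```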
For imatrix data generation, kalomaze's groups_merged.txt with added roleplay chats was used; you can find it here.
The goal is to measure the (hopefully positive) impact of this data on formatting consistency in roleplay chat scenarios.
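For context, imatrix quants of this kind are typically produced with llama.cpp's `imatrix` and `quantize` tools. The sketch below is an assumed reconstruction of that flow, not the exact commands used here; binary names and paths vary across llama.cpp versions, and the calibration file name is illustrative.

```python
# Rough sketch of the F16-GGUF -> imatrix -> imatrix-quant pipeline with llama.cpp tools.
# File names and binary names (imatrix/quantize vs. llama-imatrix/llama-quantize) are
# assumptions; adjust them for your local llama.cpp build.
import subprocess

F16_GGUF = "Eris-Daturamix-7b-F16.gguf"      # GGUF(F16) converted from the base model
CALIBRATION = "groups_merged_roleplay.txt"   # groups_merged.txt + added roleplay chats (name assumed)
IMATRIX = "imatrix.dat"

# 1) Compute the importance matrix over the calibration text.
subprocess.run(["./imatrix", "-m", F16_GGUF, "-f", CALIBRATION, "-o", IMATRIX], check=True)

# 2) Produce each imatrix-aware quant from the F16 GGUF.
for quant in ["Q4_K_M", "IQ4_XS", "Q5_K_M", "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"]:
    subprocess.run(
        ["./quantize", "--imatrix", IMATRIX, F16_GGUF,
         f"Eris-Daturamix-7b-{quant}-imat.gguf", quant],
        check=True,
    )
```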
Original model information:
The following models were included in the merge:

- Test157t/Eris-Floramix-7b
- ResplendentAI/Datura_7B
Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: Test157t/Eris-Floramix-7b
        layer_range: [0, 32]
      - model: ResplendentAI/Datura_7B
        layer_range: [0, 32]
merge_method: slerp
base_model: Test157t/Eris-Floramix-7b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
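For intuition, `merge_method: slerp` interpolates each pair of tensors along a great-circle arc rather than linearly, with `t` controlling how far the result sits toward the second model (here scheduled across layer blocks, and differently for self-attention and MLP weights). The NumPy snippet below is a minimal illustration of the per-tensor operation, not mergekit's actual implementation.

```python
# Minimal illustration of spherical linear interpolation (SLERP) between two weight
# tensors; mergekit applies an equivalent operation per tensor, with t taken from the
# schedules in the config above.
import numpy as np

def slerp(w0: np.ndarray, w1: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    v0, v1 = w0.ravel(), w1.ravel()
    n0, n1 = np.linalg.norm(v0), np.linalg.norm(v1)
    u0, u1 = v0 / (n0 + eps), v1 / (n1 + eps)
    dot = np.clip(np.dot(u0, u1), -1.0, 1.0)
    omega = np.arccos(dot)
    if omega < eps:
        # Near-parallel tensors: fall back to plain linear interpolation.
        return (1 - t) * w0 + t * w1
    out = (np.sin((1 - t) * omega) * u0 + np.sin(t * omega) * u1) / np.sin(omega)
    # Rescale to an interpolated norm and restore the original shape.
    return (out * ((1 - t) * n0 + t * n1)).reshape(w0.shape)

# t = 0 keeps the base model's tensor, t = 1 takes Datura_7B's, t = 0.5 sits midway.
```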