Quantized using 200 samples of 8192 tokens from an RP-oriented PIPPA dataset.
Branches:
- main -- measurement.json
- 2.25b6h -- 2.25bpw, 6bit lm_head
- 3.7b6h -- 3.7bpw, 6bit lm_head
- 6b6h -- 6bpw, 6bit lm_head
Requires ExLlamaV2 version 0.0.12 or newer.
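As a rough sketch of how to grab one of the branches above (assuming the `huggingface_hub` Python package; the branch name and local directory here are just examples):

```python
# Sketch: download a single quantization branch with huggingface_hub.
# Assumes `pip install huggingface_hub`; adjust revision/local_dir as needed.
from huggingface_hub import snapshot_download

model_dir = snapshot_download(
    repo_id="rAIfle/Solstice-Mixtral-v1-exl2-rpcal",
    revision="3.7b6h",  # branch: 3.7bpw, 6bit lm_head
    local_dir="Solstice-Mixtral-v1-exl2-3.7bpw",
)
print(f"Model files downloaded to: {model_dir}")
```

Point your ExLlamaV2-compatible loader (e.g. text-generation-webui or TabbyAPI) at the downloaded folder.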
Original model link: Sao10K/Solstice-Mixtral-v1
Original model README below.
GGUF: https://huggingface.co/Sao10K/Solstice-Mixtral-v1-GGUF
Solstice-11B-v1 but on Mixtral. More info there.
Experimental. May or may not be good, Mixtral training is... difficult to work with.
Trained with Vicuna / ShareGPT Format, but Alpaca Instruct should work fine too.
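As a rough illustration of the Vicuna-style format (the exact system line and spacing are assumptions; match whatever Vicuna preset your frontend ships):

```python
# Sketch of a Vicuna-style prompt; the system text and whitespace are assumptions,
# not necessarily the exact template this model was trained on.
def build_vicuna_prompt(user_message: str) -> str:
    system = (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions."
    )
    return f"{system}\n\nUSER: {user_message}\nASSISTANT:"

print(build_vicuna_prompt("Stay in character as a grumpy innkeeper and greet me."))
```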
As usual, it handles NSFW scenarios fine; after all, it is trained on lewd outputs. One slightly odd behaviour: it can be reluctant in zero-shot settings, but in actual roleplays / usage it's fine.
Pretty nice. Vicuna gave slightly better outputs than Alpaca, though the difference may be minor.
I like that it stays in character.
I like using Universal-Light preset in SillyTavern.
I really appreciate your feedback / supportive comments. They keep me going.
Support me here :)
Base model: mistralai/Mixtral-8x7B-v0.1