
#llama3 #sillytavern #multimodal

This is an experimental model! It doesn't look stable; let's just collectively say that this one was a learning experience, and the next version will be a banger.

Relevant:
These quants were made after the fixes from llama.cpp/pull/6920 were merged.
Use KoboldCpp version 1.64 or higher, make sure you're up-to-date.

I apologize for disrupting your experience.
My upload speeds have been cooked and unstable lately.
If you want and are able to, you can support my various endeavors here (Ko-fi).

GGUF-IQ-Imatrix quants for ChaoticNeutrals/Puppy_Purpose_0.69.

The imatrix was generated from the FP16-GGUF, and the quant conversions were made from the BF16-GGUF.
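As a rough sketch of that workflow (file names are placeholders, and the binary names vary between llama.cpp versions):

```shell
# Compute an importance matrix from the FP16 GGUF using calibration text
# (calibration.txt is a placeholder for your own calibration data).
./llama-imatrix -m Puppy_Purpose_0.69-F16.gguf -f calibration.txt -o imatrix.dat

# Quantize the BF16 GGUF using the importance matrix, e.g. to Q4_K_M.
./llama-quantize --imatrix imatrix.dat Puppy_Purpose_0.69-BF16.gguf \
    Puppy_Purpose_0.69-Q4_K_M-imat.gguf Q4_K_M
```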

Author:
"Say hello to your puppy princess, she is pawsitively pleased to play with you!"

Compatible SillyTavern presets here (simple) or here (Virt's Roleplay Presets - recommended).
Use the latest version of KoboldCpp. Use the provided presets for testing.
Feedback and support for the authors are always welcome.
If there are any issues or questions, let me know.

For 8GB VRAM GPUs, I recommend the Q4_K_M-imat (4.89 BPW) quant for up to 12288 context sizes.
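A minimal launch sketch for that setup, assuming KoboldCpp with CUDA offload (the model file name and GPU layer count are placeholders; tune `--gpulayers` to your card):

```shell
# Launch KoboldCpp with the Q4_K_M-imat quant at 12288 context.
python koboldcpp.py \
    --model Puppy_Purpose_0.69-Q4_K_M-imat.gguf \
    --contextsize 12288 \
    --usecublas \
    --gpulayers 33
```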

Original model information:

Puppy Purpose 0.69


Say hello to your puppy princess, she is pawsitively pleased to play with you!

A combination of model merges and lora merges using my signature datasets. I'm not too sure how this one will turn out, I made it for my own usage, but it should serve others well too. This model is compatible with our Chaotic Neutrals Llama3 mmproj files. Good luck and have fun!
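For image input, KoboldCpp can pair a text model with a multimodal projector via its `--mmproj` flag; a hedged sketch (the mmproj file name is a placeholder for the Chaotic Neutrals Llama3 mmproj release):

```shell
# Load the text model together with a Llama3 mmproj file for multimodal use.
python koboldcpp.py \
    --model Puppy_Purpose_0.69-Q4_K_M-imat.gguf \
    --mmproj llama3-mmproj-f16.gguf \
    --contextsize 12288
```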

Format: GGUF
Model size: 8.03B params
Architecture: llama
