---
base_model: ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B
library_name: transformers
tags:
- mergekit
- merge
license: other
language:
- en
pipeline_tag: text-generation
---

# QuantFactory/Poppy_Porpoise-1.4-L3-8B-GGUF

This is a quantized version of [ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B) created using llama.cpp.

# Model Description

"Poppy Porpoise" is an AI roleplay assistant based on the Llama 3 8B model, specializing in crafting immersive narrative experiences. With its advanced language capabilities, Poppy draws users into interactive, engaging adventures, tailoring each one to their individual preferences.

Note: This variant is an attempt to get something closer to 0.72 while maintaining the improvements of 1.30.

[Presets in repo folder](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.0-L3-8B/tree/main/Porpoise_1.0-Presets).

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Nitral-AI/Pp-72xra1
        layer_range: [0, 32]
      - model: Nitral-AI/Poppy-1.35-Phase1
        layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Pp-72xra1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
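
### Usage

The GGUF files in this repo can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using the `llama-cpp-python` bindings; the quantization filename and sampling parameters are illustrative assumptions, so substitute the quant file you actually downloaded.

```python
# Minimal sketch: load a GGUF quant with llama-cpp-python (pip install llama-cpp-python).
# The model filename below is an assumption; use whichever quant file you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Poppy_Porpoise-1.4-L3-8B.Q4_K_M.gguf",  # assumed filename
    n_ctx=8192,        # Llama 3 8B context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set to 0 for CPU-only
)

# Chat-style call; llama-cpp-python reads the chat template from the GGUF metadata.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Poppy, an immersive roleplay assistant."},
        {"role": "user", "content": "Set the opening scene of a mystery aboard an airship."},
    ],
    max_tokens=256,
    temperature=0.8,
)
print(response["choices"][0]["message"]["content"])
```

For roleplay use, the presets linked above can be imported into your frontend of choice and combined with whatever sampling settings you prefer.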