![FocusMix 7B](/Nelathan/Qwen2-7b-FocusMix-GGUF/resolve/main/focusmix.jpg)
# FocusMix 7B GGUF
Using llama.cpp release b3557 for static quantization.
Original model: https://huggingface.co/Nelathan/Qwen2-7B-FocusMix
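
A minimal sketch of loading one of these GGUF quants with llama-cpp-python. The quant filename below is hypothetical; substitute whichever file you actually download from this repo.

```python
# Sketch: load a GGUF quant of FocusMix 7B with llama-cpp-python.
# The filename is an assumption; use the quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2-7b-FocusMix.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm("Write one sentence about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```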
Prompt template: ChatML

```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
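
As a quick illustration, here is a small helper that assembles a prompt string in the ChatML format shown above. The function name and placeholder contents are illustrative only; most runtimes (including llama-cpp-python with `chat_format="chatml"`) can also apply this template for you.

```python
# Illustrative helper that formats a single-turn prompt using the ChatML
# template above. The trailing "<|im_start|>assistant\n" leaves the model
# to generate the assistant turn.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Hello!"))
```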