GGUF llama.cpp quantized version of:

Recommended Prompt Format (mistral-v7-tekken)

<s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST]
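
Below is a minimal sketch of how this template could be assembled and sent to the quantized model with llama-cpp-python. The model file name, system prompt, and user message are placeholders, and the assumption that the tokenizer adds the leading `<s>` (BOS) token itself is noted in the comments; adjust to your setup.

```python
# Sketch: applying the mistral-v7-tekken prompt format via llama-cpp-python.
# The GGUF path below is a placeholder; point it at the 4-bit or 5-bit file
# you actually downloaded.
from llama_cpp import Llama

llm = Llama(model_path="model-Q4_K_M.gguf", n_ctx=4096)

system_prompt = "You are a helpful assistant."
user_message = "Summarize the plot of Hamlet in two sentences."

# The BOS token <s> is normally prepended by the tokenizer itself, so it is
# omitted from the literal prompt string here.
prompt = (
    f"[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT]"
    f"[INST]{user_message}[/INST]"
)

out = llm(prompt, max_tokens=256, stop=["</s>", "[INST]"])
print(out["choices"][0]["text"])
```

For a multi-turn exchange, append the previous assistant response followed by `</s>` and the next `[INST]...[/INST]` block, as shown in the template above.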
Format: GGUF
Model size: 23.6B params
Architecture: llama
Available quantizations: 4-bit, 5-bit