Imatrix quants for Sao10K/Fimbulvetr-11B-v2, as requested at #36.
Prompt Format: Alpaca or Vicuna.
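
For convenience, here is a minimal sketch of prompting one of these quants with llama-cpp-python using an Alpaca-style template; the quant file name, context size, and sampling settings are illustrative only:

```python
# Minimal sketch: loading a quant with llama-cpp-python and sending an
# Alpaca-formatted prompt. The file name and settings below are illustrative.
from llama_cpp import Llama

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

llm = Llama(
    model_path="Fimbulvetr-11B-v2-IQ4_XS-imat.gguf",  # hypothetical file name
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers if built with GPU support
)

prompt = ALPACA_TEMPLATE.format(instruction="Introduce yourself in character.")
out = llm(prompt, max_tokens=256, temperature=0.8, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```

The Vicuna format (`USER: ... ASSISTANT:`) works the same way; only the template string changes.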
An absolute classic and highly popular roleplay model, now with newer quants as requested directly.
Imatrix data was generated from the FP16 GGUF, and the quantized conversions were made from it as well, since the original model weights are already in FP16.
Quantization used the latest version of llama.cpp available at the time, release b2774.
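
For context, the sketch below outlines the usual llama.cpp imatrix workflow that quants like these follow; it assumes the b2774-era tool names (`imatrix`, `quantize`) and a hypothetical calibration text file, and is not the exact command line used for this repo:

```python
# Rough outline of the standard llama.cpp imatrix quantization flow,
# assuming b2774-era binaries and a hypothetical calibration corpus.
import subprocess

FP16_GGUF = "Fimbulvetr-11B-v2-F16.gguf"   # GGUF converted from the FP16 weights
CALIB_TXT = "calibration.txt"              # hypothetical calibration text
IMATRIX   = "imatrix.dat"

# 1. Collect importance-matrix statistics from the FP16 GGUF.
subprocess.run(["./imatrix", "-m", FP16_GGUF, "-f", CALIB_TXT, "-o", IMATRIX],
               check=True)

# 2. Quantize the FP16 GGUF, guided by the importance matrix.
subprocess.run(["./quantize", "--imatrix", IMATRIX, FP16_GGUF,
                "Fimbulvetr-11B-v2-IQ4_XS-imat.gguf", "IQ4_XS"],
               check=True)
```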
Base model: Sao10K/Fimbulvetr-11B-v2