Untested; currently available for evaluation only.
IntelligentEstate/Jari-7B-Q4_K_M-GGUF
After the experts over at IBM hijacked the work of our counterparts here on the 🤗 Hub, we thought it prudent to abliterate their finely crafted Granite and set it back upon the masses as a four-headed beast. As they continue, so will we: rinse, remark on that which is remarkable, and repeat.
This model was converted to GGUF format from huihui-ai/Huihui-granite-4.0-h-tiny-abliterated using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to IBM's Granite hybrid model suite for foundation-model information, and to the abliteration process for further background. Overall, enjoy.
Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):
brew install llama.cpp
Invoke the llama.cpp server or the CLI.
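A minimal sketch of both invocations, using llama.cpp's `--hf-repo`/`--hf-file` flags to pull the quant directly from the Hub. The exact GGUF file name inside the repo is an assumption based on the usual GGUF-my-repo naming convention; check the repo's file listing if the download fails.

```shell
# CLI: one-shot generation from a prompt
# (file name below is assumed from the repo's naming convention)
llama-cli \
  --hf-repo fuzzy-mittenz/Huihui-granite-4.0-h-tiny-abliterated-Q4_K_M-GGUF \
  --hf-file huihui-granite-4.0-h-tiny-abliterated-q4_k_m.gguf \
  -p "The meaning to life and the universe is"

# Server: OpenAI-compatible HTTP endpoint on localhost:8080
llama-server \
  --hf-repo fuzzy-mittenz/Huihui-granite-4.0-h-tiny-abliterated-Q4_K_M-GGUF \
  --hf-file huihui-granite-4.0-h-tiny-abliterated-q4_k_m.gguf \
  -c 2048
```

Either command downloads and caches the GGUF on first run; `-c` sets the context length for the server.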
Model tree for fuzzy-mittenz/Huihui-granite-4.0-h-tiny-abliterated-Q4_K_M-GGUF
Base model: ibm-granite/granite-4.0-h-tiny