- warshanks/Qwen3-30B-A3B-Instruct-2507-AWQ
  Text Generation • 5B • Updated • 72
- warshanks/Qwen3-16B-A3B-abliterated-AWQ
  Text Generation • 3B • Updated • 50
- warshanks/Huihui-Qwen3-14B-abliterated-v2-AWQ
  Text Generation • 3B • Updated • 1.01k • 2
- warshanks/Qwen3-8B-abliterated-AWQ
  Text Generation • 2B • Updated • 11
Ben Shankles (PRO)
warshanks
AI & ML interests: MLX, AWQ, GPTQ
Recent Activity
- Liked a model 17 days ago: unsloth/Apriel-1.5-15b-Thinker-GGUF
- New activity about 1 month ago on LiquidAI/LFM2-350M-ENJP-MT-GGUF: "Sensitive to Quantization"
- Updated a model about 2 months ago: warshanks/Hermes-4-14B-AWQ
Collections
AWQ Quants
W4A16 quants made with llmcompressor
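As a rough illustration of how W4A16 quants like these are typically produced with llm-compressor, the sketch below applies a GPTQ-style W4A16 recipe in a single one-shot pass. The base model id, calibration dataset, and sample counts are placeholder assumptions, not the exact settings behind this collection.

```python
# Illustrative W4A16 one-shot quantization with llm-compressor.
# Base model, calibration dataset, and sample counts are placeholders;
# the actual recipe used for these repos may differ.
from llmcompressor import oneshot  # older releases: from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# 4-bit weights, 16-bit activations; the lm_head is usually left unquantized.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

oneshot(
    model="Qwen/Qwen3-8B",            # any HF causal LM (assumption)
    dataset="open_platypus",          # small open calibration set (assumption)
    recipe=recipe,
    output_dir="Qwen3-8B-W4A16",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```

The resulting compressed-tensors checkpoint can then be pushed to the Hub and loaded by vLLM or another runtime that understands the format.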
MedGemma MLX
MedGemma MLX conversions I've done (see the usage sketch after this list).
- mlx-community/medgemma-4b-it-4bit
  Image-Text-to-Text • 0.9B • Updated • 325 • 2
- mlx-community/medgemma-4b-it-6bit
  Image-Text-to-Text • 1B • Updated • 32 • 1
- mlx-community/medgemma-4b-it-8bit
  Image-Text-to-Text • 1B • Updated • 92 • 1
- mlx-community/medgemma-4b-it-bf16
  Image-Text-to-Text • 5B • Updated • 290 • 1
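A minimal sketch of running one of the MedGemma conversions above with mlx-vlm. The prompt and image path are placeholders, and mlx-vlm's helper names have shifted slightly across versions, so treat this as a pattern rather than a fixed API.

```python
# Minimal mlx-vlm usage sketch for an image-text-to-text MLX conversion.
# The image path and prompt are placeholders.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/medgemma-4b-it-4bit"
model, processor = load(model_path)
config = load_config(model_path)

images = ["chest_xray.png"]  # placeholder image
prompt = apply_chat_template(
    processor, config, "Describe the key findings in this image.", num_images=len(images)
)

output = generate(model, processor, prompt, images, verbose=False)
print(output)
```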
DeepSeek MLX
DeepSeek MLX conversions I've done (see the conversion and usage sketch after this list).
- mlx-community/DeepSeek-R1-0528-Qwen3-8B-4bit
  Text Generation • 1B • Updated • 963 • 4
- mlx-community/DeepSeek-R1-0528-Qwen3-8B-6bit
  Text Generation • 8B • Updated • 43 • 1
- mlx-community/DeepSeek-R1-0528-Qwen3-8B-8bit
  Text Generation • 2B • Updated • 301 • 1
- mlx-community/DeepSeek-R1-0528-Qwen3-8B-bf16
  Text Generation • 8B • Updated • 276 • 2
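For text-generation conversions like these, the usual workflow is mlx-lm's convert script followed by load/generate. The sketch below shows that pattern for the 4-bit DeepSeek repo above; exact flags and defaults depend on the installed mlx-lm version.

```python
# Sketch: convert an HF checkpoint to quantized MLX, then run it with mlx-lm.
# Conversion is typically done from the CLI, e.g. for a 4-bit variant:
#   mlx_lm.convert --hf-path deepseek-ai/DeepSeek-R1-0528-Qwen3-8B \
#       -q --q-bits 4 --mlx-path DeepSeek-R1-0528-Qwen3-8B-4bit
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/DeepSeek-R1-0528-Qwen3-8B-4bit")

messages = [{"role": "user", "content": "Summarize what AWQ quantization does."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```

The 6-bit, 8-bit, and bf16 repos follow the same pattern with different --q-bits settings (or no quantization at all for bf16).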
Nemotron MLX
Nemotron MLX conversions I've done
- mlx-community/AceReason-Nemotron-7B-4bit
  Text Generation • 1B • Updated • 49
- mlx-community/AceReason-Nemotron-7B-8bit
  Text Generation • 2B • Updated • 5
- mlx-community/AceReason-Nemotron-7B-bf16
  Text Generation • 8B • Updated • 9
- mlx-community/AceReason-Nemotron-1.1-7B-4bit
  Text Generation • 1B • Updated • 7
Abliterated MLX
Abliterated model conversions to MLX
- mlx-community/Josiefied-Health-Qwen3-8B-abliterated-v1-4bit
  Text Generation • 1B • Updated • 24
- mlx-community/Josiefied-Health-Qwen3-8B-abliterated-v1-8bit
  Text Generation • 2B • Updated • 20
- mlx-community/Josiefied-Health-Qwen3-8B-abliterated-v1-bf16
  Text Generation • 8B • Updated • 20 • 1
- mlx-community/Josiefied-DeepSeek-R1-0528-Qwen3-8B-abliterated-v1-4bit
  Text Generation • 1B • Updated • 359 • 1
Menlo Research AWQ
Menlo Research Quantizations
Mistral Quants
Mistral Quantizations
Lingshu MLX
Lingshu MLX conversions
- Lingshu: A Generalist Foundation Model for Unified Multimodal Medical Understanding and Reasoning
  Paper • 2506.07044 • Published • 113
- mlx-community/Lingshu-7B-4bit
  Image-Text-to-Text • 1B • Updated • 10
- mlx-community/Lingshu-7B-6bit
  Image-Text-to-Text • Updated • 5
- mlx-community/Lingshu-7B-8bit
  Image-Text-to-Text • Updated • 8
Medical MLX
Healthcare-oriented LLM conversions to MLX
- mlx-community/medgemma-4b-it-4bit
  Image-Text-to-Text • 0.9B • Updated • 325 • 2
- mlx-community/medgemma-4b-it-6bit
  Image-Text-to-Text • 1B • Updated • 32 • 1
- mlx-community/medgemma-4b-it-8bit
  Image-Text-to-Text • 1B • Updated • 92 • 1
- mlx-community/medgemma-4b-it-bf16
  Image-Text-to-Text • 5B • Updated • 290 • 1
TheDrummer MLX
TheDrummer MLX model conversions I've done
MiMo-VL MLX-VLM
Qwen 3 AWQ
- warshanks/Qwen3-30B-A3B-Instruct-2507-AWQ
  Text Generation • 5B • Updated • 72
- warshanks/Qwen3-16B-A3B-abliterated-AWQ
  Text Generation • 3B • Updated • 50
- warshanks/Huihui-Qwen3-14B-abliterated-v2-AWQ
  Text Generation • 3B • Updated • 1.01k • 2
- warshanks/Qwen3-8B-abliterated-AWQ
  Text Generation • 2B • Updated • 11
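AWQ/W4A16 repos like the ones listed above are typically loaded with vLLM or another quantization-aware runtime. A minimal loading sketch, assuming vLLM picks the quantization scheme up from the checkpoint config:

```python
# Minimal vLLM sketch for one of the AWQ repos above; vLLM normally detects
# the quantization scheme from the checkpoint's config.
from vllm import LLM, SamplingParams

llm = LLM(model="warshanks/Qwen3-30B-A3B-Instruct-2507-AWQ")
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Give a one-sentence overview of mixture-of-experts models."], params)
print(outputs[0].outputs[0].text)
```

The same repo can also be exposed over an OpenAI-compatible endpoint with `vllm serve warshanks/Qwen3-30B-A3B-Instruct-2507-AWQ`.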