Molmo-7B-D, quantized to 4-bit with bitsandbytes (BnB): 30 GB -> 7 GB on disk

Approx. 12 GB of VRAM is required for inference.
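The size reduction follows from simple arithmetic. A minimal sketch, assuming roughly 7.5B parameters (an assumption for illustration; check the base model card for the exact count):

```python
# Back-of-envelope storage math for 4-bit quantization (illustrative only).
def model_size_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate weight storage in GB at a given precision."""
    return n_params * bits_per_param / 8 / 1e9

n_params = 7.5e9                       # assumed parameter count
fp32 = model_size_gb(n_params, 32)     # full precision: ~30 GB
nf4 = model_size_gb(n_params, 4)       # 4-bit weights alone: ~3.8 GB
# The ~7 GB on-disk figure exceeds the raw 4-bit size because some layers
# (e.g. embeddings and norms) are typically kept in higher precision.
print(f"fp32: ~{fp32:.0f} GB, 4-bit weights: ~{nf4:.1f} GB")
```

Runtime VRAM is higher than the weight size alone because of activations, the KV cache, and dequantization buffers, hence the ~12 GB figure above.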

See the base model for more information:

https://huggingface.co/allenai/Molmo-7B-D-0924

Example code:

https://github.com/cyan2k/molmo-7b-bnb-4bit
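A minimal loading sketch with Hugging Face transformers. Molmo ships custom modeling code, so `trust_remote_code=True` is required; the repo id, dtype, and device placement below are assumptions, so defer to the linked example repo for the authoritative version:

```python
# Hedged sketch: load the 4-bit quantized Molmo checkpoint.
# Assumed repo id; adjust if you use a different mirror.
MODEL_ID = "detect-tech/molmo-7B-D-bnb-4bit"

def load_model():
    # Imports live inside the function so the sketch stays importable
    # on machines without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoProcessor

    # Molmo's processor and model classes come from the repo's custom code.
    processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        trust_remote_code=True,
        device_map="auto",  # place layers on GPU; expect ~12 GB VRAM in use
    )
    return processor, model
```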

Performance metrics and benchmarks comparing this quant against the base model will follow over the next week.

