MolmoAct-7B-D AWQ 4-bit (Text-Only Quantization)
This is a 4-bit AWQ quantized version of allenai/MolmoAct-7B-D-0812 using LLM Compressor.
Key Features
- ✅ Qwen2.5 text decoder quantized (4-bit AWQ) - ~56% smaller overall
- ✅ SigLip2 vision encoder preserved (FP16) - maintains visual quality
- ✅ Robotic manipulation action reasoning - trained on 10k robot trajectories
- ✅ Smart quantization - only the LLM layers are quantized; vision components are untouched
- ✅ 93 unique manipulation tasks supported
Model Details
- Base Model: allenai/MolmoAct-7B-D-0812 (7B parameters)
- Architecture: MolmoAct (Qwen2.5-7B decoder + SigLip2 vision encoder)
- Quantization Method: AWQ (Activation-aware Weight Quantization)
- Quantization Scheme: W4A16 (4-bit weights, 16-bit activations; see the sketch after this list)
- Calibration Dataset: Flickr30k (512 samples)
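W4A16 means the weights are stored as 4-bit integers with higher-precision scales while activations stay in 16-bit; at inference the packed weights are dequantized back to 16-bit for the matmul. Below is a minimal illustration of symmetric group-wise 4-bit round-to-nearest quantization, not AWQ itself (AWQ additionally rescales channels based on activation statistics); the group size of 128 is an assumption for illustration only.

```python
import torch

def quantize_w4_groupwise(w: torch.Tensor, group_size: int = 128):
    """Symmetric group-wise 4-bit quantization of a [out_features, in_features] weight."""
    out_features, in_features = w.shape
    groups = w.reshape(out_features, in_features // group_size, group_size)
    # One scale per group; the symmetric 4-bit integer range is [-8, 7]
    scales = groups.abs().amax(dim=-1, keepdim=True) / 7.0
    q = torch.clamp(torch.round(groups / scales), -8, 7).to(torch.int8)
    return q, scales.half()

def dequantize_w4(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    """Rebuild 16-bit weights for the matmul; activations stay in 16-bit throughout."""
    return (q.float() * scales.float()).reshape(q.shape[0], -1).half()

w = torch.randn(4096, 4096)
q, scales = quantize_w4_groupwise(w)
w_hat = dequantize_w4(q, scales)
print("mean abs quantization error:", (w - w_hat.float()).abs().mean().item())
```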
Size Comparison
| Metric | Value |
|---|---|
| Original (FP16) | ~14.0 GB |
| Quantized (W4A16) | ~6.12 GB |
| Reduction | ~56.3% |
| Memory Saved | ~7.9 GB |
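The reduction and savings rows follow directly from the two sizes:

```python
original_gb, quantized_gb = 14.0, 6.12
print(f"reduction:    {(1 - quantized_gb / original_gb) * 100:.1f}%")  # ~56.3%
print(f"memory saved: {original_gb - quantized_gb:.2f} GB")            # ~7.88 GB
```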
What Was Quantized
Quantized (4-bit):
- Qwen2.5 decoder layers (text/language model)
- Text processing linear layers in the decoder
Preserved (FP16):
- SigLip2 vision encoder (maintains visual understanding quality)
- Vision-text connectors
- Embeddings
- Language model head
This selective quantization ensures that vision understanding quality remains nearly identical to the original model while significantly reducing size.
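If you want to verify the split yourself, you can tally tensor dtypes per top-level submodule of the loaded checkpoint. This is a generic sketch that assumes `model` is already loaded as in the Usage section below; the exact submodule names and the on-disk representation of the packed 4-bit weights depend on how transformers deserializes the quantized checkpoint. Vision-encoder tensors should report 16-bit dtypes, while the quantized decoder linears show up as packed integer weights plus 16-bit scales.

```python
from collections import defaultdict

# Assumes `model` has been loaded as shown in the Usage section below.
totals = defaultdict(lambda: defaultdict(int))
for name, tensor in model.state_dict().items():
    prefix = ".".join(name.split(".")[:2])   # coarse grouping by top-level submodule
    totals[prefix][str(tensor.dtype)] += tensor.numel()

for prefix, by_dtype in sorted(totals.items()):
    summary = ", ".join(f"{dtype}: {n / 1e6:.1f}M" for dtype, n in by_dtype.items())
    print(f"{prefix:45s} {summary}")
```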
About MolmoAct-7B-D
MolmoAct-7B-D is an open-source action reasoning model for robotic manipulation developed by the Allen Institute for AI:
- Training Data: 10k high-quality trajectories of a single-arm Franka robot
- Text Decoder: Qwen2.5-7B (state-of-the-art open LLM)
- Vision Encoder: SigLip2 (proven vision backbone)
- Capabilities: 93 unique manipulation tasks
- Use Case: Robotic manipulation and action reasoning
Usage
```python
from transformers import AutoModelForImageTextToText, AutoProcessor, GenerationConfig
from PIL import Image
import requests

# Load the processor and the quantized model
# (trust_remote_code pulls in the MolmoAct modeling code)
processor = AutoProcessor.from_pretrained(
    "ronantakizawa/molmoact-7b-d-awq-w4a16",
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
)
model = AutoModelForImageTextToText.from_pretrained(
    "ronantakizawa/molmoact-7b-d-awq-w4a16",
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
)

# Process the image and text
inputs = processor.process(
    images=[Image.open(requests.get("https://picsum.photos/id/237/536/354", stream=True).raw)],
    text="What actions can be performed with the objects in this image?",
)

# Move inputs to the model's device and add a batch dimension
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

# Generate output
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer,
)

# Decode only the newly generated tokens
generated_tokens = output[0, inputs["input_ids"].size(1):]
generated_text = processor.tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(generated_text)
```
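Since the checkpoint holds roughly 6 GB of weights, the model plus activations should fit on a single mid-range GPU for short prompts. You can check the actual footprint after generation with standard PyTorch utilities (numbers vary with image resolution, context length, and allocator behavior):

```python
import torch

# Report GPU memory use after running the generation above
if torch.cuda.is_available():
    print(f"currently allocated: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")
    print(f"peak allocated:      {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")
```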
Quantization Details
- Method: AWQ (Activation-aware Weight Quantization)
- Pipeline: LLM Compressor's independent pipeline (BasicPipeline), calibrating and quantizing the decoder layer by layer
- Calibration: 512 Flickr30k image-text pairs
- Max Sequence Length: 2048 tokens
- Why AWQ: activation-aware quantization protects the weights most sensitive to activations, reducing accuracy loss compared to naive round-to-nearest quantization
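For reference, a run along these lines can be reproduced with LLM Compressor's one-shot API. The recipe below is a hedged sketch, not the exact script used for this checkpoint: the `ignore` patterns for the vision tower and connector are assumptions about MolmoAct's module names, and the calibration-data handling is simplified (multimodal calibration typically also needs a matching data collator).

```python
from transformers import AutoModelForImageTextToText
from llmcompressor import oneshot
from llmcompressor.modifiers.awq import AWQModifier

# Load the FP16 base model
model = AutoModelForImageTextToText.from_pretrained(
    "allenai/MolmoAct-7B-D-0812", trust_remote_code=True, torch_dtype="auto"
)

recipe = AWQModifier(
    targets=["Linear"],
    scheme="W4A16",
    # Keep the vision encoder, connector, embeddings, and LM head in FP16.
    # NOTE: these ignore patterns are assumptions; check model.named_modules()
    # for the real prefixes before running.
    ignore=["lm_head", "re:.*vision.*", "re:.*connector.*", "re:.*embed.*"],
)

oneshot(
    model=model,
    dataset="flickr30k",               # assumed dataset alias for the 512 calibration pairs
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
    output_dir="molmoact-7b-d-awq-w4a16",
)
```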
Limitations
- May have slight quality degradation in complex action reasoning compared to FP16
- Vision encoder is NOT quantized (intentional for quality)
- Requires a recent transformers release with support for loading AWQ-quantized (compressed-tensors) checkpoints
- Designed for robotic manipulation tasks, not general conversation
Important Notes
Image Requirements
Ensure images are in RGB format:
```python
from PIL import Image

image = Image.open(...)
if image.mode != "RGB":
    image = image.convert("RGB")
```
License
Apache 2.0 (same as base model)
Citation
```bibtex
@misc{molmoact-7b-d-awq,
  title={MolmoAct-7B-D AWQ 4-bit},
  author={Quantized by ronantakizawa},
  year={2025},
  url={https://huggingface.co/ronantakizawa/molmoact-7b-d-awq-w4a16}
}
```
Acknowledgements
- Base model by Allen Institute for AI
- Quantization using LLM Compressor
🤖 Generated with LLM Compressor