Still not tested yet; if you have feedback, it would be very nice (;

Format: GGUF
Model size: 17B params
Architecture: wan

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
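
A minimal sketch for fetching one of the quantized files with huggingface_hub. The exact `.gguf` filename below is an assumption (quant filenames typically follow a `<model>-<quant>.gguf` pattern); check the repository's file list for the real names.

```python
# Minimal sketch: download one quantization of this repo via huggingface_hub.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="QuantStack/Wan2.2-VACE-Fun-A14B-GGUF",
    filename="Wan2.2-VACE-Fun-A14B-Q4_K_M.gguf",  # hypothetical filename; verify in the repo
)
print(local_path)  # path to the cached .gguf file
```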
