
This is the 7B Qwen2-VL vision-language model exported to ONNX via https://github.com/pdufour/llm-export.

Also see https://huggingface.co/pdufour/Qwen2-VL-2B-Instruct-ONNX-Q4-F16 for a 2B model that is compatible with onnxruntime-webgpu.
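
As a minimal sketch, the exported ONNX graphs can be loaded with `onnxruntime` in Python. The filename used below is an assumption for illustration only; replace it with one of the actual `.onnx` files listed in this repository.

```python
# Minimal sketch: download one exported ONNX graph and open an inference session.
# The filename is hypothetical -- check this repo's file listing for real names.
from huggingface_hub import hf_hub_download
import onnxruntime as ort

model_path = hf_hub_download(
    repo_id="pdufour/Qwen2-VL-7B-Instruct-onnx",
    filename="decoder_model_merged.onnx",  # assumption: substitute an actual file from the repo
)

# CPU execution provider shown here; swap in others (e.g. CUDA) if available.
session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

# Inspect the graph's expected inputs before wiring up the full pipeline.
print([inp.name for inp in session.get_inputs()])
```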
