Qwen3-VL-235B-A22B-Instruct-FP8-BLOCK

Model Overview

  • Model Architecture: Qwen3VLMoeForConditionalGeneration
    • Input: Text, Image
    • Output: Text
  • Model Optimizations:
    • Weight quantization: FP8
    • Activation quantization: FP8
  • Release Date:
  • Version: 1.0
  • Model Developers: Red Hat

Quantized version of Qwen/Qwen3-VL-235B-A22B-Instruct.

Model Optimizations

This model was obtained by quantizing the weights and activations of Qwen/Qwen3-VL-235B-A22B-Instruct to the FP8 data type. This optimization reduces the number of bits per parameter from 16 to 8, cutting disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within the transformer blocks of the language model are quantized; the vision tower and lm_head are left in their original precision (see the recipe in the Creation section).
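As a rough illustration of the scheme, the sketch below quantizes a weight matrix to FP8 (E4M3) with one scale per block. The 128x128 block size and the round-trip via torch.float8_e4m3fn are assumptions for illustration only; the authoritative FP8_BLOCK definition lives in llm-compressor and compressed-tensors.

import torch

# Illustrative per-block FP8 (E4M3) weight quantization.
# BLOCK = 128 is an assumption for illustration, not the scheme's
# authoritative definition.
FP8_MAX = 448.0  # largest finite magnitude representable in float8_e4m3fn
BLOCK = 128

def quantize_fp8_block(weight: torch.Tensor):
    """Quantize a 2D weight with one scale per BLOCK x BLOCK tile.

    Assumes both dimensions are divisible by BLOCK.
    """
    rows, cols = weight.shape
    q = torch.empty_like(weight, dtype=torch.float8_e4m3fn)
    scales = torch.empty(rows // BLOCK, cols // BLOCK)
    for i in range(0, rows, BLOCK):
        for j in range(0, cols, BLOCK):
            tile = weight[i : i + BLOCK, j : j + BLOCK]
            # One scale per tile, chosen so the tile's max maps to FP8_MAX.
            scale = tile.abs().max().clamp(min=1e-12) / FP8_MAX
            q[i : i + BLOCK, j : j + BLOCK] = (tile / scale).to(torch.float8_e4m3fn)
            scales[i // BLOCK, j // BLOCK] = scale
    return q, scales

w = torch.randn(256, 512)
w_q, s = quantize_fp8_block(w)
# Dequantize to measure the round-trip error introduced by FP8.
w_dq = w_q.to(torch.float32) * s.repeat_interleave(BLOCK, 0).repeat_interleave(BLOCK, 1)
print(f"max abs error: {(w - w_dq).abs().max():.4f}")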

Deployment

Use with vLLM

  1. Initialize vLLM server:
vllm serve nm-testing/Qwen3-VL-235B-A22B-Instruct-FP8-BLOCK --tensor_parallel_size 8
  2. Send requests to the server:
from openai import OpenAI

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://<your-server-host>:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

model = "nm-testing/Qwen3-VL-235B-A22B-Instruct-FP8-BLOCK"

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image_url",
                "image_url": {"url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"},
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

outputs = client.chat.completions.create(
    model=model,
    messages=messages,
)

generated_text = outputs.choices[0].message.content
print(generated_text)
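vLLM also supports offline batched inference without a server. A minimal sketch using vLLM's Python API follows; the sampling parameters are illustrative, and tensor_parallel_size should match your hardware:

from vllm import LLM, SamplingParams

llm = LLM(
    model="nm-testing/Qwen3-VL-235B-A22B-Instruct-FP8-BLOCK",
    tensor_parallel_size=8,
)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image_url",
                "image_url": {"url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"},
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# llm.chat applies the model's chat template before generating.
outputs = llm.chat(messages, SamplingParams(temperature=0.7, max_tokens=256))
print(outputs[0].outputs[0].text)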

Creation

This model was quantized using the llm-compressor library as shown below.

Creation details
from transformers import AutoProcessor, Qwen3VLMoeForConditionalGeneration

from llmcompressor import oneshot
from llmcompressor.modeling import replace_modules_for_calibration
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen3-VL-235B-A22B-Instruct"

# Load model.
model = Qwen3VLMoeForConditionalGeneration.from_pretrained(MODEL_ID, dtype="auto")
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = replace_modules_for_calibration(model)

# Configure the quantization algorithm and scheme.
# In this case, we:
#   * quantize the weights to fp8 with per-block quantization
#   * quantize the activations to fp8 with dynamic token activations
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_BLOCK",
    ignore=[
        "re:.*lm_head",
        "re:visual.*",
        "re:model.visual.*",
        "re:.*mlp.gate$",
    ],
)

# Apply quantization.
oneshot(model=model, recipe=recipe)

# Save to disk in compressed-tensors format.
SAVE_DIR = MODEL_ID.rstrip("/").split("/")[-1] + "-FP8-BLOCK"
model.save_pretrained(SAVE_DIR)
processor.save_pretrained(SAVE_DIR)
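A quick way to sanity-check the export is to inspect the quantization_config that llm-compressor writes into the saved config.json. A small sketch follows; the exact field names in compressed-tensors checkpoints may vary by version:

import json
import os

with open(os.path.join(SAVE_DIR, "config.json")) as f:
    config = json.load(f)

# Records the quantization format, scheme, and ignored modules.
print(json.dumps(config.get("quantization_config", {}), indent=2))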

Evaluation

The model was evaluated on the OpenLLM V1 and V2 benchmark suites using lm-evaluation-harness, and on the HumanEval and MBPP coding benchmarks using EvalPlus. vLLM was used as the inference engine for all evaluations.

Evaluation details

OpenLLM V1

lm_eval \
  --model vllm \
  --model_args pretrained="nm-testing/Qwen3-VL-235B-A22B-Instruct-FP8-BLOCK",dtype=auto,add_bos_token=True,max_model_len=16384,tensor_parallel_size=4,gpu_memory_utilization=0.9,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks openllm \
  --write_out \
  --batch_size auto \
  --output_path $output_path/openllm.json \
  --show_config

OpenLLM V2

lm_eval \
  --model vllm \
  --model_args pretrained="nm-testing/Qwen3-VL-235B-A22B-Instruct-FP8-BLOCK",dtype=auto,add_bos_token=False,max_model_len=16384,tensor_parallel_size=4,gpu_memory_utilization=0.7,disable_log_stats=True,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks leaderboard \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --write_out \
  --batch_size auto \
  --output_path $output_path/leaderboard.json \
  --show_config

Coding Benchmarks

evalplus.evaluate --model "nm-testing/Qwen3-VL-235B-A22B-Instruct-FP8-BLOCK" \
                  --dataset "humaneval" \
                  --backend vllm \
                  --tp 4 \
                  --greedy
evalplus.evaluate --model "nm-testing/Qwen3-VL-235B-A22B-Instruct-FP8-BLOCK" \
                --dataset "mbpp" \
                --backend vllm \
                --tp 4 \
                --greedy

Accuracy

| Category | Metric | Qwen/Qwen3-VL-235B-A22B-Instruct | nm-testing/Qwen3-VL-235B-A22B-Instruct-FP8-BLOCK | Recovery (%) |
|---|---|---|---|---|
| OpenLLM V1 | ARC-Challenge (Acc-Norm, 25-shot) | 76.19 | 76.28 | 100.11 |
| | GSM8K (Strict-Match, 5-shot) | 41.24 | 41.70 | 101.10 |
| | HellaSwag (Acc-Norm, 10-shot) | 87.89 | 87.65 | 99.73 |
| | MMLU (Acc, 5-shot) | 87.15 | 87.25 | 100.11 |
| | TruthfulQA (MC2, 0-shot) | 63.08 | 63.24 | 100.26 |
| | Winogrande (Acc, 5-shot) | 82.00 | 81.85 | 99.81 |
| | Average Score | 72.92 | 73.00 | 100.11 |
| OpenLLM V2 | IFEval (Inst Level Strict Acc, 0-shot) | 91.01 | 90.29 | 99.21 |
| | BBH (Acc-Norm, 3-shot) | 73.72 | 73.95 | 100.31 |
| | Math-Hard (Exact-Match, 4-shot) | 61.71 | 20.69 | 33.54 |
| | GPQA (Acc-Norm, 0-shot) | 32.13 | 32.89 | 102.35 |
| | MUSR (Acc-Norm, 0-shot) | 42.06 | 41.80 | 99.37 |
| | MMLU-Pro (Acc, 5-shot) | 65.82 | 65.65 | 99.73 |
| | Average Score | 61.07 | 54.21 | 88.77 |
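Recovery is simply the quantized model's score expressed as a percentage of the baseline model's score:

def recovery(baseline: float, quantized: float) -> float:
    """Recovery (%) = quantized score / baseline score * 100."""
    return quantized / baseline * 100.0

# Example using the ARC-Challenge row above.
print(f"{recovery(76.19, 76.28):.2f}")  # -> 100.11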