This is a speculator model designed for use with Qwen/Qwen3-8B, based on the EAGLE-3 speculative decoding algorithm.
It was trained using the speculators library on a combination of the Aeala/ShareGPT_Vicuna_unfiltered and the HuggingFaceH4/ultrachat_200k datasets.
The model was trained with thinking disabled.
This model should be used with the Qwen/Qwen3-8B chat template, specifically through the /chat/completions endpoint.
```shell
vllm serve Qwen/Qwen3-8B \
  -tp 1 \
  --speculative-config '{
    "model": "RedHatAI/Qwen3-8B-speculator.eagle3",
    "num_speculative_tokens": 3,
    "method": "eagle3"
  }'
```
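Once the server is up, requests go through the standard OpenAI-compatible `/v1/chat/completions` route. Below is a minimal sketch of a request body; the sampling values are an assumption here, mirroring those used in the benchmark command later in this card:

```python
import json

# Illustrative request payload for vLLM's OpenAI-compatible
# /v1/chat/completions endpoint. The sampling parameters mirror the
# guidellm benchmark command in this card (temperature 0.6, top_p 0.95,
# top_k 20); adjust them for your own workload.
payload = {
    "model": "Qwen/Qwen3-8B",
    "messages": [
        {"role": "user", "content": "Write a function that reverses a string."}
    ],
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,
}

# Serialize exactly as it would be sent in the POST body.
print(json.dumps(payload, indent=2))
```

Sent with any HTTP client (or the `openai` Python package pointed at `http://localhost:8000/v1`), the server applies the Qwen3 chat template to `messages` automatically, which is why the `/chat/completions` route is the recommended entry point.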
| Use Case | Dataset | Number of Samples | 
|---|---|---|
| Coding | HumanEval | 168 | 
| Math Reasoning | gsm8k | 80 | 
| Text Summarization | CNN/Daily Mail | 80 | 
| Use Case | k=1 | k=2 | k=3 | k=4 | k=5 | k=6 | k=7 | 
|---|---|---|---|---|---|---|---|
| Coding | 1.72 | 2.17 | 2.39 | 2.59 | 2.60 | 2.59 | 2.69 | 
| Math Reasoning | 1.73 | 2.20 | 2.48 | 2.63 | 2.72 | 2.79 | 2.81 | 
| Text Summarization | 1.62 | 1.96 | 2.13 | 2.24 | 2.25 | 2.29 | 2.30 | 
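For a quick read of the results, the best observed speedup per use case can be pulled out with a short script. The values are copied from the table above, under the assumption that k corresponds to the `num_speculative_tokens` setting:

```python
# Speedup factors from the table above, indexed by k = 1..7
# (assumed to be the num_speculative_tokens setting).
speedups = {
    "Coding":             [1.72, 2.17, 2.39, 2.59, 2.60, 2.59, 2.69],
    "Math Reasoning":     [1.73, 2.20, 2.48, 2.63, 2.72, 2.79, 2.81],
    "Text Summarization": [1.62, 1.96, 2.13, 2.24, 2.25, 2.29, 2.30],
}

# Report the k that maximizes speedup for each task.
for task, vals in speedups.items():
    best_k = max(range(1, len(vals) + 1), key=lambda k: vals[k - 1])
    print(f"{task}: {vals[best_k - 1]:.2f}x at k={best_k}")
```

In these runs the speedup keeps rising through k=7 for every use case, while the serve example above uses `num_speculative_tokens: 3`; the right setting depends on your hardware and traffic.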
<details>
<summary>Command</summary>

```shell
GUIDELLM__PREFERRED_ROUTE="chat_completions" \
guidellm benchmark \
  --target "http://localhost:8000/v1" \
  --data "RedHatAI/SpeculativeDecoding" \
  --rate-type sweep \
  --max-seconds 600 \
  --output-path "Qwen3-8B-HumanEval.json" \
  --backend-args '{"extra_body": {"chat_completions": {"temperature":0.6, "top_p":0.95, "top_k":20}}}'
```
</details>