This is a speculator model designed for use with meta-llama/Llama-3.3-70B-Instruct, based on the EAGLE-3 speculative decoding algorithm.
It was trained using the speculators library on a combination of the Aeala/ShareGPT_Vicuna_unfiltered dataset and the train_sft split of HuggingFaceH4/ultrachat_200k.
This model should be used with the meta-llama/Llama-3.3-70B-Instruct chat template, specifically through the /chat/completions endpoint. To serve it with vLLM:
```bash
vllm serve meta-llama/Llama-3.3-70B-Instruct \
    -tp 4 \
    --speculative-config '{
        "model": "RedHatAI/Llama-3.3-70B-Instruct-speculator.eagle3",
        "num_speculative_tokens": 3,
        "method": "eagle3"
    }'
```
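Once the server is up, requests are sent through the OpenAI-compatible /v1/chat/completions route; speculative decoding is transparent to the client, since the speculator only accelerates generation by the target model. A minimal request sketch (the prompt and sampling settings are illustrative placeholders):

```bash
# Example request against the server launched above; the prompt and
# temperature are illustrative placeholders, not part of the model card.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Llama-3.3-70B-Instruct",
        "messages": [{"role": "user", "content": "Write a quicksort function in Python."}],
        "temperature": 0.0
      }'
```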
The speculator was evaluated on three use cases, using the following datasets:

| Use Case | Dataset | Number of Samples |
|---|---|---|
| Coding | HumanEval | 168 |
| Math Reasoning | gsm8k | 80 |
| Text Summarization | CNN/Daily Mail | 80 |
Measured acceptance lengths (the average number of tokens generated per target-model forward pass, bounded above by k+1) for each number of speculative tokens k (the num_speculative_tokens setting):

| Use Case | k=1 | k=2 | k=3 | k=4 | k=5 | k=6 | k=7 |
|---|---|---|---|---|---|---|---|
| Coding | 1.84 | 2.53 | 3.07 | 3.42 | 3.71 | 3.89 | 4.00 |
| Math Reasoning | 1.81 | 2.43 | 2.88 | 3.17 | 3.30 | 3.42 | 3.53 |
| Text Summarization | 1.71 | 2.21 | 2.52 | 2.74 | 2.83 | 2.87 | 2.89 |
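The k values map directly to the num_speculative_tokens field of --speculative-config, so the sweep can be reproduced by relaunching the server once per k. A minimal sketch, with the per-k benchmark step elided:

```bash
# Hypothetical sweep over num_speculative_tokens; one server launch per k.
for k in 1 2 3 4 5 6 7; do
  vllm serve meta-llama/Llama-3.3-70B-Instruct \
    -tp 4 \
    --speculative-config "{
      \"model\": \"RedHatAI/Llama-3.3-70B-Instruct-speculator.eagle3\",
      \"num_speculative_tokens\": ${k},
      \"method\": \"eagle3\"
    }" &
  SERVER_PID=$!
  # ... wait for readiness and run the benchmark for this k, then:
  kill "${SERVER_PID}"
done
```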
Throughput was benchmarked with guidellm, using commands of the form shown below (this example targets the HumanEval coding workload).

<details>
<summary>Command</summary>
```bash
GUIDELLM__PREFERRED_ROUTE="chat_completions" \
guidellm benchmark \
  --target "http://localhost:8000/v1" \
  --data "RedHatAI/SpeculativeDecoding" \
  --rate-type sweep \
  --max-seconds 240 \
  --output-path "Llama-3.3-70B-Instruct-HumanEval.json" \
  --backend-args '{"extra_body": {"chat_completions": {"temperature":0.0}}}'
```
</details>
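Beyond the guidellm report, the running vLLM server exposes Prometheus counters at /metrics that include speculative-decoding statistics; since the exact metric names vary across vLLM versions, match them loosely:

```bash
# Inspect speculative-decoding metrics on the running server.
# Metric names differ between vLLM versions, so grep broadly.
curl -s http://localhost:8000/metrics | grep -iE "spec|accept"
```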