How to run speculative decoding of this model with the 0.5B model?
I get errors in vLLM.
The first is that vocab_size is different, so in the 0.5B model's config.json I set vocab_size=152064 to match the 32B model's value.
But now I get the error
assert loaded_weight.shape[output_dim] == self.org_vocab_size
I don't see a way to change output_dim.
Is there any way to get around this?
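To illustrate what is happening, here is a minimal sketch, assuming the draft model is Qwen/Qwen2.5-Coder-0.5B-Instruct shipping a single model.safetensors file with a model.embed_tokens.weight tensor (all three are assumptions, adjust them for your checkpoint): editing vocab_size in config.json does not change the weights on disk, which still have the original number of vocabulary rows, so the loader's assertion still fails.

```python
# Sketch: compare the config's vocab_size against the actual embedding rows on disk.
# Model id, filename and tensor name are assumptions; adjust for your checkpoint.
from huggingface_hub import hf_hub_download
from safetensors import safe_open
from transformers import AutoConfig

draft_id = "Qwen/Qwen2.5-Coder-0.5B-Instruct"                  # assumed draft model id
cfg = AutoConfig.from_pretrained(draft_id)
weights_path = hf_hub_download(draft_id, "model.safetensors")  # assumed filename

with safe_open(weights_path, framework="pt") as f:
    embed = f.get_slice("model.embed_tokens.weight")           # assumed tensor name
    rows, _hidden = embed.get_shape()

print(f"original config vocab_size: {cfg.vocab_size}")
print(f"embedding rows on disk:     {rows}")
# If config.json is edited to say 152064 but the tensor keeps its original row
# count, vLLM's `loaded_weight.shape[output_dim] == self.org_vocab_size` check
# fails while loading the draft model's weights.
```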
How do you run speculative decoding with vLLM for this model?
Can you provide the whole procedure for running vLLM here? Preferably from Docker.
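For reference, here is a minimal sketch of a speculative-decoding launch with vLLM's offline LLM API, assuming Qwen/Qwen2.5-Coder-32B-Instruct as the target and Qwen/Qwen2.5-Coder-0.5B-Instruct as the draft (both model ids are assumptions), and assuming a late-2024 vLLM release where the engine arguments are called speculative_model and num_speculative_tokens; newer releases group these into a single speculative_config, so check the docs for your version.

```python
# Sketch of speculative decoding with vLLM's offline API.
# Model ids and argument names are assumptions; see your vLLM version's docs.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",               # assumed target model
    speculative_model="Qwen/Qwen2.5-Coder-0.5B-Instruct",  # assumed draft model
    num_speculative_tokens=5,
    tensor_parallel_size=2,                                # adjust to your GPUs
)

outputs = llm.generate(
    ["Write a Python function that reverses a string."],
    SamplingParams(temperature=0.0, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```

The same engine arguments can be passed as command-line flags (e.g. --speculative-model, --num-speculative-tokens) to the vllm/vllm-openai Docker image, which forwards them to the OpenAI-compatible server, but the vocab-size mismatch described above has to be resolved first.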
FYI, speculative decoding "just works" with exllamav2 (via TabbyAPI); I haven't had any issues using the 1.5B model for the draft.
Just looking at this too, and noticed the 7B model has the same vocab size, but the 0.5B, 1.5B, and 3B are all different?
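A quick way to check this across sizes is to read each checkpoint's config, as in the sketch below; the Qwen2.5-Coder model ids are assumptions, swap in whichever family you are actually using.

```python
# Print vocab_size for each model size to see which ones match the 32B value.
# Model ids are assumptions; replace with the family you are using.
from transformers import AutoConfig

for size in ["0.5B", "1.5B", "3B", "7B", "32B"]:
    model_id = f"Qwen/Qwen2.5-Coder-{size}-Instruct"
    cfg = AutoConfig.from_pretrained(model_id)
    print(f"{model_id}: vocab_size={cfg.vocab_size}")
```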
There is a thread on the vLLM GitHub addressing this issue:
https://github.com/vllm-project/vllm/issues/10913