support tokenised prompt (online vllm)

#17

Online vLLM inference passes an already tokenised (pre-processed) prompt to the multimodal preprocessor, so the preprocessor needs to accept token IDs rather than raw text.
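As a rough sketch of what a tokenised prompt looks like when sent to an online vLLM server: the OpenAI-compatible completions endpoint accepts a list of token IDs in place of a text string. The model name and token values below are placeholders, not taken from this PR.

```python
# Hypothetical sketch: building a request body with a pre-tokenised
# prompt for a vLLM OpenAI-compatible /v1/completions endpoint.
# The completions API accepts a list of token IDs instead of a string.
import json

# Token IDs as produced by the model's tokenizer (placeholder values,
# not a real encoding).
token_ids = [1, 3923, 374, 279, 6864, 315, 9822, 30]

payload = {
    "model": "my-multimodal-model",  # assumed model name
    "prompt": token_ids,             # token IDs in place of raw text
    "max_tokens": 64,
}

body = json.dumps(payload)
# A client would POST `body` to http://<server>/v1/completions.
print(body)
```

On the server side, this is the point where the prompt arrives already tokenised, which is why the multimodal preprocessor has to handle token-ID input.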

Payoto changed pull request status to open
Payoto changed pull request status to closed
