Using KV Cache when the new input is more than one token

#2
opened by skoneru

Hello,

I am having a problem when using the KV cache with PaliGemma models. Based on the code line here, it looks like the cache is only updated with new inputs one token at a time. However, if one wants to cache the prompt for tasks such as reranking, it should be possible to pass new inputs of dynamic length against the cache, as is supported for models like Llama. Is there a possibility that this may be added in the future?
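For context, here is a minimal sketch of the kind of prompt reuse I mean, written against a Llama-style model that already accepts multi-token inputs alongside an existing cache. The checkpoint name, query, and candidate passages are placeholders, and the `DynamicCache` usage is an assumption about a recent Transformers version, not PaliGemma's current behavior:

```python
# Sketch: cache a shared prompt once, then score several multi-token
# candidates against it (a typical reranking pattern). Assumes a model
# that supports dynamic-length new inputs with past_key_values (e.g. Llama).
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.cache_utils import DynamicCache

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.to("cuda" if torch.cuda.is_available() else "cpu").eval()

prompt = "Query: best pizza in town\nPassage:"  # placeholder prompt
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

# 1) Run the shared prompt once and keep its KV cache.
with torch.no_grad():
    out = model(prompt_ids, past_key_values=DynamicCache(), use_cache=True)
prompt_cache = out.past_key_values

# 2) Score each candidate by feeding all of its tokens at once on top of the cache.
def score(candidate: str) -> float:
    cand_ids = tokenizer(candidate, add_special_tokens=False,
                         return_tensors="pt").input_ids.to(model.device)
    cache = copy.deepcopy(prompt_cache)  # keep the shared prompt cache untouched
    with torch.no_grad():
        logits = model(cand_ids, past_key_values=cache, use_cache=True).logits
    # logits[:, t] predicts cand_ids[:, t + 1]; the first candidate token's
    # probability (predicted from the last prompt position) is skipped for brevity.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = cand_ids[:, 1:]
    return log_probs.gather(-1, targets.unsqueeze(-1)).sum().item()

for passage in ["Luigi's has the best margherita.", "The library closes at 9pm."]:
    print(passage, score(passage))
```

With PaliGemma as it is today, step 2 would have to feed the candidate token by token, which is what makes prompt caching for reranking impractical.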

Google org

Hi @skoneru , sorry for the late response. Your observation is correct: PaliGemma models, as they stand, only extend the cache one token at a time. This token-by-token caching can be inefficient for use cases like reranking, where reusing cached information across longer new inputs would improve performance. Models like Llama indeed allow caching with dynamic input lengths, which provides better flexibility and efficiency. Thank you.
