Inference code fails with ValueError: too many values to unpack (expected 2)
The error message is as follows:
Traceback (most recent call last):
  File "/public/qth/LLaMA-Factory/test.py", line 27, in <module>
    outputs = model.generate(**inputs, **gen_kwargs)
  File "/root/anaconda/envs/llama-factory/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/anaconda/envs/llama-factory/lib/python3.10/site-packages/transformers/generation/utils.py", line 1914, in generate
    result = self._sample(
  File "/root/anaconda/envs/llama-factory/lib/python3.10/site-packages/transformers/generation/utils.py", line 2651, in _sample
    outputs = self(
  File "/root/anaconda/envs/llama-factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/anaconda/envs/llama-factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/glm-4-9b-chat/modeling_chatglm.py", line 879, in forward
    transformer_outputs = self.transformer(
  File "/root/anaconda/envs/llama-factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/anaconda/envs/llama-factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/glm-4-9b-chat/modeling_chatglm.py", line 775, in forward
    hidden_states, presents, all_hidden_states, all_self_attentions = self.encoder(
  File "/root/anaconda/envs/llama-factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/anaconda/envs/llama-factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/glm-4-9b-chat/modeling_chatglm.py", line 609, in forward
    layer_ret = layer(
  File "/root/anaconda/envs/llama-factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/anaconda/envs/llama-factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/glm-4-9b-chat/modeling_chatglm.py", line 510, in forward
    attention_output, kv_cache = self.self_attention(
  File "/root/anaconda/envs/llama-factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/anaconda/envs/llama-factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/glm-4-9b-chat/modeling_chatglm.py", line 376, in forward
    cache_k, cache_v = kv_cache
ValueError: too many values to unpack (expected 2)
After some investigation, the cause appears to be improper handling in the forward path of the GLMTransformer class: a tuple gets mishandled, so the per-layer kv_cache ultimately ends up being the string "past_key_values".
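A standalone sketch of that failure mode (my reconstruction from the diagnosis above, not code from the repo): newer transformers versions return a (cache_name, cache) pair where the old custom ChatGLM code expected just the cache, so the string slides into the per-layer slot and the two-way unpack at modeling_chatglm.py line 376 blows up.

    # Hypothetical stand-in values; only the shapes matter here.
    legacy_cache = (("k0", "v0"), ("k1", "v1"))     # per-layer (key, value) pairs
    extracted = ("past_key_values", legacy_cache)   # (cache_name, cache) pair from newer transformers
    kv_caches = extracted                           # old code stores the whole pair
    kv_cache = kv_caches[0]                         # -> the string "past_key_values"
    cache_k, cache_v = kv_cache                     # ValueError: too many values to unpack (expected 2)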
Adding the following to the forward function of the GLMTransformer class seems to make it run.
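The exact snippet from the original post is not shown above; purely as a sketch, assuming the stray ("past_key_values", cache) pair described in the diagnosis, a guard at the top of GLMTransformer.forward could unwrap it:

    # Hypothetical guard, not the original poster's exact patch: unwrap the
    # (cache_name, cache) pair before indexing kv_caches per layer.
    if isinstance(kv_caches, tuple) and len(kv_caches) == 2 and isinstance(kv_caches[0], str):
        kv_caches = kv_caches[1]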
Downgrade transformers to 4.40.2 (pip install transformers==4.40.2).
If I downgrade, it is no longer compatible with the other frameworks I use.
The code has now been updated to be compatible with 4.44.0.
Just pull the latest code from this repository (everything except the model weight files) and it will work.
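For a local snapshot, one way to refresh only the code files is shown below (a sketch; the repo id THUDM/glm-4-9b-chat and the local path are assumptions inferred from the traceback paths above):

    # Re-download only code/config files, leaving the large weight files alone.
    from huggingface_hub import snapshot_download

    snapshot_download(
        repo_id="THUDM/glm-4-9b-chat",       # assumed repo id
        allow_patterns=["*.py", "*.json"],   # code and config, no *.safetensors
        local_dir="glm-4-9b-chat",           # assumed local snapshot path
    )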