ClipCaptionModel errors #2
opened by xiaoxin666
Hi, when running the following code from caption.py in openshape-demo-support/openshape/demo:
import torch
from huggingface_hub import hf_hub_download
from transformers import GPT2Tokenizer

# ClipCaptionModel is defined earlier in caption.py
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
prefix_length = 10
model = ClipCaptionModel(prefix_length)
# print(model.gpt_embedding_size)
model.load_state_dict(torch.load(hf_hub_download('OpenShape/clipcap-cc', 'conceptual_weights.pt', token=True), map_location='cpu'))
I get the following error:
RuntimeError: Error(s) in loading state_dict for ClipCaptionModel:
Unexpected key(s) in state_dict: "gpt.transformer.h.0.attn.bias", "gpt.transformer.h.0.attn.masked_bias", "gpt.transformer.h.1.attn.bias",
"gpt.transformer.h.1.attn.masked_bias", "gpt.transformer.h.2.attn.bias", "gpt.transformer.h.2.attn.masked_bias", "gpt.transformer.h.3.attn.bias",
"gpt.transformer.h.3.attn.masked_bias", "gpt.transformer.h.4.attn.bias", "gpt.transformer.h.4.attn.masked_bias", "gpt.transformer.h.5.attn.bias",
"gpt.transformer.h.5.attn.masked_bias", "gpt.transformer.h.6.attn.bias", "gpt.transformer.h.6.attn.masked_bias", "gpt.transformer.h.7.attn.bias",
"gpt.transformer.h.7.attn.masked_bias", "gpt.transformer.h.8.attn.bias", "gpt.transformer.h.8.attn.masked_bias", "gpt.transformer.h.9.attn.bias",
"gpt.transformer.h.9.attn.masked_bias", "gpt.transformer.h.10.attn.bias", "gpt.transformer.h.10.attn.masked_bias", "gpt.transformer.h.11.attn.bias",
"gpt.transformer.h.11.attn.masked_bias".
What is wrong?
It seems that the transformers package was recently updated. Downgrading to the older version 4.29.2 (e.g. pip install transformers==4.29.2) resolves the problem.
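If pinning transformers is not desirable, a minimal sketch of an alternative workaround (assuming the only unexpected keys are the attn.bias / attn.masked_bias buffers listed in the error, which newer transformers releases no longer register) is to strip those entries from the checkpoint before loading:

import torch
from huggingface_hub import hf_hub_download

# Load the checkpoint as before.
state_dict = torch.load(
    hf_hub_download('OpenShape/clipcap-cc', 'conceptual_weights.pt', token=True),
    map_location='cpu',
)
# Drop the stale attention buffers that cause the "Unexpected key(s)" error.
state_dict = {
    k: v for k, v in state_dict.items()
    if not k.endswith('.attn.bias') and not k.endswith('.attn.masked_bias')
}
model.load_state_dict(state_dict)

These buffers are not trainable parameters, so removing them should not change the model's behavior; still, the version pin above is the tested fix.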
Closing, as this was marked as solved in the GitHub issue.
eliphatfs changed discussion status to closed