Error(s) in loading state_dict for PeftModelForCausalLM:

#23 opened by rameshch

I fine-tuned the Llama-3.2-1B-Instruct model on a custom dataset, and when I tried to load the adapter as below

from peft import PeftModel
peft_model = PeftModel.from_pretrained(model, adapter_path)

I am getting this error:
Error(s) in loading state_dict for PeftModelForCausalLM:
size mismatch for base_model.model.model.embed_tokens.weight: copying a param with shape torch.Size([128258, 2048]) from checkpoint, the shape in current model is torch.Size([128256, 2048])

Please assist.


Sorry, it was a mistake at my end: I missed re-initializing the tokenizer (via trl's setup_chat_format) before loading the adapter, as below:

_, tokenizer = setup_chat_format(model, tokenizer)
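In case it helps anyone else: the two extra rows in the checkpoint (128258 vs. 128256) match the special tokens that setup_chat_format adds, so the freshly loaded base model has to go through the same setup before the adapter is loaded. Below is a minimal sketch of the load order, assuming the adapter was trained after calling setup_chat_format; the base model id dtype choice and the adapter path are placeholders, not exact values from my script.

```python
# Minimal sketch of a load order that avoids the size mismatch.
# Assumptions: the adapter was trained after setup_chat_format, and
# "path/to/adapter" is a placeholder for your adapter directory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import setup_chat_format
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-1B-Instruct"
adapter_path = "path/to/adapter"  # placeholder

model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# The Instruct tokenizer already ships a chat template; depending on the trl
# version, setup_chat_format refuses to overwrite it unless it is cleared first.
tokenizer.chat_template = None

# Re-apply the same chat-format setup used during fine-tuning: this adds the
# extra special tokens and resizes embed_tokens so the embedding matrix
# matches the checkpoint shape (128258 x 2048).
model, tokenizer = setup_chat_format(model, tokenizer)

# Now the adapter's state_dict fits the resized base model.
peft_model = PeftModel.from_pretrained(model, adapter_path)
```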

rameshch changed discussion status to closed
