Untrained Tokens in Qwen2.5

by Abhi31

I am trying to train this model but getting the following error:

ValueError: Unsloth: Untrained tokens of [[]] found, but embed_tokens & lm_head not trainable, causing NaNs. Restart then add `embed_tokens` & `lm_head` to `FastLanguageModel.get_peft_model(target_modules = [..., "embed_tokens", "lm_head",])`. Are you using the `base` model? Instead, use the `instruct` version to silence this warning.

This is indeed an instruct model, so why is this occurring? How can I fix it?

My training script:

from unsloth import FastLanguageModel

max_seq_length = 16000  # Unsloth auto-supports RoPE scaling internally
dtype = None  # None for auto detection
load_in_4bit = True  # Use 4-bit quantization to reduce memory usage

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf
)


model = FastLanguageModel.get_peft_model(
    model,
    r = 16,  # LoRA rank
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 16,
    lora_dropout = 0,  # 0 is optimized in Unsloth
    bias = "none",     # "none" is optimized in Unsloth
    use_gradient_checkpointing = "unsloth",  # for long context
    random_state = 3407,
    use_rslora = False,  # rank-stabilized LoRA
    loftq_config = None,  # LoftQ
)
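
For completeness, the change the error message itself asks for would look like this (a sketch following the message's own suggestion, untested on my side; making embed_tokens and lm_head trainable also increases VRAM usage):

model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",
                      "embed_tokens", "lm_head",],  # added per the error message
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = "unsloth",
    random_state = 3407,
    use_rslora = False,
    loftq_config = None,
)

But since this is already the instruct model, I don't understand why that should be necessary.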

Same problem here.

Unsloth AI org

Interesting, will take a look guys, thanks!
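
In the meantime, here is a rough heuristic to check which token embeddings actually look untrained (an illustrative sketch, not Unsloth's exact internal check; it flags embedding rows that are all zeros or that collapse onto the mean row):

import torch

with torch.no_grad():
    # Input embedding matrix, shape (vocab_size, hidden_size).
    embed = model.get_input_embeddings().weight
    mean_row = embed.mean(dim=0)
    # Rows that are exactly zero, or numerically identical to the mean row,
    # are likely placeholders that were never trained.
    zero_rows = embed.abs().sum(dim=1) == 0
    near_mean = (embed - mean_row).abs().sum(dim=1) < 1e-6
    suspect_ids = (zero_rows | near_mean).nonzero(as_tuple=True)[0].tolist()
    print("Possibly untrained token ids:", suspect_ids)
    print([tokenizer.convert_ids_to_tokens(i) for i in suspect_ids])

If this prints an empty list, the warning may be a false positive for this checkpoint.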
