Getting an error while fine-tuning: "'NoneType' object is not subscriptable"

#39
by mdnasif - opened

[Screenshot of the error attached: Screenshot 2024-11-11 134327.png]
my code:
```python
if compute_dtype == torch.float16 and use_4bit:
    major, _ = torch.cuda.get_device_capability()
    if major >= 8:
        print("=" * 80)
        print("Your GPU supports bfloat16: accelerate training with bf16=True")
        print("=" * 80)

# Load LoRA configuration
peft_config = LoraConfig(
    lora_alpha=lora_alpha,
    lora_dropout=lora_dropout,
    r=lora_r,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
)

# Set training parameters
training_arguments = TrainingArguments(
    output_dir=output_dir,
    num_train_epochs=num_train_epochs,
    per_device_train_batch_size=per_device_train_batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    optim=optim,
    save_steps=save_steps,
    logging_steps=logging_steps,
    learning_rate=learning_rate,
    weight_decay=weight_decay,
    fp16=fp16,
    bf16=bf16,
    max_grad_norm=max_grad_norm,
    max_steps=max_steps,
    warmup_ratio=warmup_ratio,
    group_by_length=group_by_length,
    lr_scheduler_type=lr_scheduler_type,
    report_to="tensorboard",
)

tokenizer.chat_template = "default"

# Set supervised fine-tuning parameters
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    args=training_arguments,
    packing=packing,
)

model.config.use_cache = False

# Train model
trainer.train()
```
The error is raised at trainer.train().

Llava Hugging Face org

@mdnasif I updated the configs on the hub yesterday. Can you try again with the latest transformers version?

If it doesn't work, can you share reproducible code with a small dataset and the model you are using?
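For reference, after upgrading (e.g. with pip install -U transformers), a quick generic sanity check that the environment is actually using the new release is:

```python
# Confirm which transformers release the environment is actually running
# after the upgrade (generic check, not specific to this model).
import transformers

print(transformers.__version__)
```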

Thank you, sir. I will let you know.
