
Fine-tune merged model on downstream task

#2 · opened by laelhalawani

Hi, how would one go about PEFT fine-tuning the merged model on a downstream task? Can I just use the standard Hugging Face PEFT library and fine-tune it like any other Llama model?

Yeah, you can treat the merged model like any other Llama model for further fine-tuning.
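
For reference, a minimal sketch of what that could look like with the standard `transformers` + `peft` libraries. The model id, LoRA hyperparameters, and target modules below are assumptions for illustration, not values prescribed by this model.

```python
# Minimal sketch: attach a LoRA adapter to the merged model with the
# standard Hugging Face PEFT library. The model id is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "your-org/your-merged-llama"  # hypothetical path to the merged model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Typical LoRA settings for Llama-style attention projections (illustrative values).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# From here, train with the usual Trainer / SFTTrainer loop on your downstream dataset.
```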
