Fine-tuned model
Will the fine-tuned model be released as well?
Currently not on our roadmap, but happy to provide guidance on how to fine-tune this base model!
We can't wait to see what the community will do with it :)
I'm pretty new to all of this, but I'm working on fine-tuning it with LLaMA-Adapter.
I've finished adding the adapter prompts, but I'm having trouble figuring out what loss function was used when fine-tuning replit-code-v1-3b, since the model's forward pass only returns logits.
I understand it was probably fine-tuned using Composer, but there are lots of options in there. Could you share the configuration you used for fine-tuning?
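For reference, here's the loss I'm assuming in the meantime -- plain next-token cross-entropy over the shifted logits. That's just my guess for a standard causal LM, not something confirmed for replit-code-v1-3b:

```python
import torch
import torch.nn.functional as F

def causal_lm_loss(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    # Standard next-token prediction: position i in the logits predicts token i+1,
    # so drop the last logit and the first label before computing cross-entropy.
    shift_logits = logits[:, :-1, :].contiguous()   # (batch, seq_len - 1, vocab)
    shift_labels = input_ids[:, 1:].contiguous()    # (batch, seq_len - 1)
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
    )
```

If the actual fine-tuning used anything beyond this (label masking, loss weighting, etc.), I'd love to know.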
@pirroh What was fine-tuned in your fine-tuned version?
Will you be providing instructions and/or a Colab notebook for fine-tuning?
Thanks!
Hey @lentan and @brianjking , thanks for your patience!
We just released a detailed guide on how to fine-tune replit-code-v1-3b
-- check the README file on our ReplitLM GitHub repo.
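In the meantime, here's a very rough sketch of what loading the model for fine-tuning can look like with the plain Hugging Face `Trainer`. This is not the setup from the guide, just a generic starting point, and `your_tokenized_dataset` is a placeholder for your own tokenized code dataset:

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# replit-code-v1-3b uses custom model code, so trust_remote_code is required.
model_name = "replit/replit-code-v1-3b"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

training_args = TrainingArguments(
    output_dir="replit-code-finetuned",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=1e-5,
    num_train_epochs=1,
    bf16=True,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=your_tokenized_dataset,  # placeholder: your tokenized dataset
    # mlm=False gives standard causal-LM labels (labels = input_ids).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

For full details (data preparation, hyperparameters, and the training stack we actually used), please follow the guide in the repo.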
Let us know in case of any issues. Otherwise, happy hacking, and post your results and derivative models here!