reorder options so debugging can happen in the same prepare step f98e173 winglian committed on May 16, 2023
move filter to before saving so it doesn't happen every time, update runpod manual script 0d28df0 winglian committed on May 14, 2023
support for multi-line inference input, log sweep over learning rates 9105935 winglian committed on May 3, 2023
quickstart instructions for starting from runpod (#5) 0a472e1 unverified winglian committed on Apr 18, 2023
WIP large refactor to make finetune script a little more manageable (#3) 6045345 unverified winglian committed on Apr 18, 2023
support for alpaca-like instruction datasets without inputs e107643 winglian committed on Apr 18, 2023
casts the prepared data to int16 (doesn't help with training memory) 2db9436 winglian committed on Apr 18, 2023
fix lora target module, require explicit flash attention, fix min logging steps, don't use adam8bit for int4, hash prepared datasets, support hf hub datasets 87e073d winglian committed on Apr 17, 2023
deepspeed doesn't work with flash-attn, and the gpu savings with flash-attn are better than the deepspeed headaches d1aed4c winglian committed on Apr 16, 2023