don't pass rope_scaling kwarg if it's None (#383) 919246f winglian committed on Aug 13, 2023
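A minimal sketch of the pattern this commit describes: only forward rope_scaling to from_pretrained when it is actually set, instead of passing an explicit None. The cfg object and its fields are placeholders, not the real axolotl loader.

```python
# Sketch only: build kwargs conditionally so rope_scaling=None is never passed.
from transformers import AutoModelForCausalLM

def load_model(cfg):
    model_kwargs = {}
    if getattr(cfg, "rope_scaling", None) is not None:
        # e.g. {"type": "linear", "factor": 2.0}
        model_kwargs["rope_scaling"] = cfg.rope_scaling
    # Omitting the kwarg entirely avoids surprising models/configs that do not
    # expect an explicit rope_scaling=None.
    return AutoModelForCausalLM.from_pretrained(cfg.base_model, **model_kwargs)
```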
try to detect accelerate and only use device_map=None in that case (#373) 094fc2c tmm1 committed on Aug 13, 2023
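A rough sketch of the idea, assuming detection via environment variables that an accelerate/distributed launcher typically sets; the exact variable names checked here are assumptions, not the commit's actual logic.

```python
# Sketch only: pick device_map based on whether this process looks like it was
# started by `accelerate launch` (env-var names are assumptions).
import os

def infer_device_map():
    launched_by_accelerate = any(
        key in os.environ
        for key in ("ACCELERATE_USE_DEEPSPEED", "ACCELERATE_USE_FSDP", "LOCAL_RANK")
    )
    if launched_by_accelerate:
        # Let accelerate handle device placement itself.
        return None
    return "auto"
```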
Attention mask and position id fixes for packing (#285) 2bb0b78 winglian committed on Aug 12, 2023
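An illustrative sketch of the general idea behind packing-aware position ids: restart the position counter at each sample boundary inside a packed row. This is not the axolotl implementation, just the underlying pattern.

```python
# Sketch only: position ids that reset at every packed-sample boundary.
import torch

def position_ids_for_packed(sample_lengths):
    """sample_lengths: lengths of the individual samples packed into one row."""
    pieces = [torch.arange(n, dtype=torch.long) for n in sample_lengths]
    return torch.cat(pieces)

# Two samples of length 4 and 3 packed into one 7-token sequence.
print(position_ids_for_packed([4, 3]))  # tensor([0, 1, 2, 3, 0, 1, 2])
```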
ensure enable_input_require_grads is called on model before getting the peft model (#345) 176b888 winglian committed on Aug 6, 2023
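A minimal sketch of the call ordering this commit enforces: enable input gradients on the base model before wrapping it with PEFT. The model name and LoRA hyperparameters below are placeholders.

```python
# Sketch only: enable_input_require_grads must run on the base model first.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")

# Lets gradients flow back through the frozen embeddings, which matters when
# combining gradient checkpointing with adapters.
model.enable_input_require_grads()

lora_config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora_config)
```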
scope flash-attn+qlora fix correctly, scope to llama, add comment 78b9efb tmm1 committed on Aug 3, 2023
ensure flash-attn fixes happen in both adapter/lora modes, and use torch_dtype 248bf90 tmm1 committed on Aug 2, 2023
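A sketch of the shape of this change, under the assumption that the flash-attention patch is applied before model load regardless of adapter mode and that the model is loaded with an explicit dtype; apply_flash_attn_patch is a hypothetical stand-in, not the project's real function.

```python
# Sketch only: load with an explicit torch_dtype in both full-finetune and
# LoRA/adapter runs, applying the flash-attention patch first.
import torch
from transformers import AutoModelForCausalLM

def apply_flash_attn_patch():
    # Hypothetical no-op stand-in for the project's flash-attention monkeypatch.
    pass

def load_base_model(base_model: str, use_flash_attn: bool):
    if use_flash_attn:
        apply_flash_attn_patch()
    # Explicit dtype so weights come in as bf16 rather than the fp32 default.
    return AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)
```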
add peft install back since it doesn't get installed by setup.py (#331) db2a358 winglian committed on Jul 31, 2023
don't use llama if trust_remote_code is set since that needs to use AutoModel path 66afb76 winglian committed on Jul 8, 2023
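A minimal sketch of the branching described by this commit: models that require trust_remote_code have to load through the Auto* classes, so the Llama-specific path is skipped in that case. Function and argument names are illustrative.

```python
# Sketch only: custom modeling code is only loadable via the AutoModel path.
from transformers import AutoModelForCausalLM, LlamaForCausalLM

def load(base_model: str, is_llama: bool, trust_remote_code: bool):
    if is_llama and not trust_remote_code:
        return LlamaForCausalLM.from_pretrained(base_model)
    return AutoModelForCausalLM.from_pretrained(
        base_model, trust_remote_code=trust_remote_code
    )
```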
Merge pull request #187 from OpenAccess-AI-Collective/strip-peft-device-map 93dacba winglian committed on Jun 12, 2023
Merge pull request #177 from NanoCode012/fix/landmark-patch 8002ffb winglian committed on Jun 12, 2023
Merge pull request #182 from OpenAccess-AI-Collective/fix-llama-ref 0124825 winglian committed on Jun 10, 2023
fix for local variable 'LlamaForCausalLM' referenced before assignment 14163c1 winglian committed on Jun 10, 2023
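A small sketch of the Python pitfall behind this error message (not the actual axolotl code): a name bound only inside one branch of a function, then referenced unconditionally. The fix is to bind it on every path.

```python
# Sketch only: how "local variable referenced before assignment" arises.

def load_model_buggy(use_flash_attn: bool):
    if use_flash_attn:
        from transformers import LlamaForCausalLM  # bound only in this branch
    # Raises UnboundLocalError when use_flash_attn is False.
    return LlamaForCausalLM.from_pretrained("huggyllama/llama-7b")

def load_model_fixed(use_flash_attn: bool):
    from transformers import LlamaForCausalLM  # bound on every path
    if use_flash_attn:
        pass  # flash-attention patching would happen here
    return LlamaForCausalLM.from_pretrained("huggyllama/llama-7b")
```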