Merge pull request #355 from tmm1/bitsandbytes-fixes (35c8b90), tmm1 committed on Aug 11, 2023
Merge pull request #350 from tmm1/group-len-false-examples (f5c11f8), tmm1 committed on Aug 9, 2023
ensure enable_input_require_grads is called on the model before getting the peft model (#345) (176b888), winglian committed on Aug 6, 2023
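For context on the ordering this commit enforces, a minimal sketch is below; the model name and LoRA settings are placeholder assumptions, not the project's actual configuration.

```python
# Minimal sketch: enable_input_require_grads() must be called on the base
# model *before* wrapping it with get_peft_model(), so gradients can flow
# from the frozen input embeddings when gradient checkpointing is enabled.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
model.enable_input_require_grads()  # called before building the PEFT wrapper

lora_config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, lora_dropout=0.05)
model = get_peft_model(model, lora_config)
```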
experimental llama 2 chat support (#296) (3392270), Jan Philipp Harries committed on Aug 6, 2023
Update XFormers Attention Monkeypatch to handle Llama-2 70B (GQA) (#339) (10405b9), ssmi153 committed on Aug 6, 2023
Added Orca Mini prompt strategy (#263) (c93655c), Jan Philipp Harries committed on Aug 5, 2023
optimize the iteration when tokenizing large datasets (#332) (fe28543), winglian committed on Aug 4, 2023
scope flash-attn+qlora fix correctly, scope to llama, add comment (78b9efb), tmm1 committed on Aug 3, 2023
ensure flash-attn fixes happen in both adapter/lora modes, and use torch_dtype (248bf90), tmm1 committed on Aug 2, 2023
add peft install back since it doesn't get installed by setup.py (#331) (db2a358), winglian committed on Jul 31, 2023
latest HEAD of accelerate causes 0 loss immediately with FSDP (#321) (9f69c4d), winglian committed on Jul 24, 2023
update prompts for open orca to match the paper (#317) (3d4984b), winglian committed on Jul 22, 2023
Merge pull request #307 from OpenAccess-AI-Collective/xgen-user-sharegpt-tokens (40a53ff), winglian committed on Jul 22, 2023
Merge pull request #313 from OpenAccess-AI-Collective/tokenizer-llama2-embeddings (3ffb018), winglian committed on Jul 22, 2023
Merge pull request #299 from OpenAccess-AI-Collective/flash-attention-2 (a94f2ee), winglian committed on Jul 22, 2023
Merge pull request #308 from OpenAccess-AI-Collective/apache2-license (1b63bf1), winglian committed on Jul 21, 2023
better handling, since the xgen tokenizer breaks with convert_tokens_to_ids (2a428e8), winglian committed on Jul 21, 2023
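The log does not show the workaround itself; a hedged sketch of the kind of defensive handling implied here is below, where the fallback to encode() and the single-id assumption are illustrative, not the repository's actual fix.

```python
# Hedged sketch (not the project's actual fix): some custom tokenizers, such
# as XGen's, do not behave like standard Hugging Face tokenizers when
# convert_tokens_to_ids is called, so fall back to a plain encode().
def token_to_id(tokenizer, token: str) -> int:
    try:
        token_id = tokenizer.convert_tokens_to_ids(token)
        if isinstance(token_id, int) and token_id != tokenizer.unk_token_id:
            return token_id
    except (AttributeError, NotImplementedError, ValueError, KeyError):
        pass
    # Assumes the token encodes to a single id without special tokens.
    return tokenizer.encode(token, add_special_tokens=False)[0]
```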