fix packing so that concatenated sequences reset the attention (9b8585d, winglian, committed on May 31, 2023)
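The fix above concerns sample packing: when several training sequences are concatenated into one row, tokens must not attend across sequence boundaries. A minimal sketch of the general idea (a block-diagonal causal mask; this is illustrative, not the repository's actual implementation):

```python
import numpy as np

def packed_causal_mask(seq_lens):
    """Build a boolean attention mask for sequences packed into one row.

    Attention is causal within each sequence and resets at every
    boundary, so the result is block-diagonal. Illustrative sketch only.
    """
    total = sum(seq_lens)
    mask = np.zeros((total, total), dtype=bool)
    start = 0
    for n in seq_lens:
        for i in range(n):
            # token i of this block attends only to tokens 0..i of the same block
            mask[start + i, start:start + i + 1] = True
        start += n
    return mask

# Two packed sequences of lengths 2 and 3: no cross-sequence attention.
m = packed_causal_mask([2, 3])
```

In practice the same effect can be achieved by resetting position ids and masking per packed segment rather than materializing a full mask.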
Merge pull request #124 from OpenAccess-AI-Collective/xformers-fix (2d0ba3b, winglian, committed on May 31, 2023)
Merge pull request #120 from OpenAccess-AI-Collective/model-from-path (c7021e1, winglian, committed on May 31, 2023)
Update src/axolotl/monkeypatch/llama_attn_hijack_xformers.py (1076bcb, winglian and Nanobit, committed on May 31, 2023)
Update src/axolotl/monkeypatch/llama_attn_hijack_xformers.py (2daa683, winglian and Nanobit, committed on May 31, 2023)
copy xformers attn from ooba since we removed dep on alpaca_lora_4bit (6cb2310, winglian, committed on May 31, 2023)
split up llama model loading so config can be loaded from base config and models can be loaded from a path (2520ecd, winglian, committed on May 31, 2023)
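The commit above separates where the model config comes from and where the weights come from. A hedged sketch of that split using the Hugging Face transformers API (the function name and parameter names here are illustrative, not axolotl's actual loader):

```python
from transformers import AutoConfig, AutoModelForCausalLM

def load_llama_split(base_config_id, weights_path):
    """Illustrative sketch: read the architecture config from one repo id,
    then load the weights from a separate local or remote path."""
    config = AutoConfig.from_pretrained(base_config_id)
    model = AutoModelForCausalLM.from_pretrained(weights_path, config=config)
    return model
```

This lets a fine-tuned checkpoint on disk reuse the base model's published config without duplicating it alongside the weights.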
Update src/axolotl/prompt_strategies/alpaca_instruct.py (c17dae6, Nanobit and winglian, committed on May 29, 2023)
new hf_use_auth_token setting so login to hf isn't required (1c33eb8, winglian, committed on May 28, 2023)
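The setting above lets a run authenticate to the Hugging Face Hub via a token instead of a prior `huggingface-cli login`. A hedged config fragment (the surrounding keys and exact placement in an axolotl YAML are assumptions):

```yaml
# illustrative axolotl config fragment
base_model: meta-llama/Llama-2-7b-hf   # hypothetical gated model
hf_use_auth_token: true                 # use the HF token from the environment
```

With this enabled, the loader can pass the token through to `from_pretrained` calls for gated or private repositories.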