recommend padding when using sample packing (#531) 3437149 winglian committed on Sep 6, 2023
Add support for GPTQ using native transformers/peft (#468) 3355706 winglian committed on Sep 5, 2023
pad_to_worst_case_seq_len boolean, for testing memory limits (#498) 8e197f6 Birch-san tmm1 committed on Aug 28, 2023
Feat(cfg): Add code-llama configs for all sizes (#479) 3513071 mhenrichsen committed on Aug 27, 2023
new llama-2 default settings (#370) fdffef5 mhenrichsen committed on Aug 14, 2023
Add wandb_entity to wandb options, update example configs, update README (#361) 7019509 Morgan McGuire winglian committed on Aug 12, 2023
Merge pull request #92 from OpenAccess-AI-Collective/flash-optimum 16bb627 winglian committed on Jun 14, 2023
Merge pull request #193 from OpenAccess-AI-Collective/config-fixes-20230612 94f310c winglian committed on Jun 12, 2023
Merge pull request #132 from utensil/falcon-7b-qlora c8242de Nanobit committed on Jun 8, 2023
Default `wandb_project` to empty as suggested a52f481 utensil Nanobit committed on Jun 8, 2023
Add comments/alternatives for falcon-qlora configs ca11ae9 utensil committed on Jun 3, 2023
swap batch size for gradient accumulation steps to decouple from num gpu c2a0792 winglian committed on May 31, 2023
Merge pull request #105 from viktoriussuwandi/viktoriussuwandi-patch 4df9da7 winglian committed on May 30, 2023
Merge pull request #106 from fearnworks/qlora-openllama-3b-example 2531ea2 winglian committed on May 30, 2023
Update examples/qlora-openllama-3b/README.md 6cee881 jphillips winglian committed on May 30, 2023
Update wandb_log_model on config-3b.yml 4eb68ac Viktorius Suwandi committed on May 29, 2023
Merge branch 'main' into refactor/rename-4b-to-gptq 147241c winglian committed on May 27, 2023
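
Several of the commits above adjust options in the YAML training config (sample packing and padding, decoupling batch size from the number of GPUs via gradient accumulation, and the wandb settings). The snippet below is a minimal illustrative sketch of how those options might appear in a config file, assuming current key names: `sample_packing` and `micro_batch_size` are assumed names not spelled out in the commit messages, and exact keys and defaults should be checked against the example configs shipped with the repository.

```yaml
# Illustrative sketch only; verify key names against the repo's example configs.

# Sample packing places multiple short examples into one sequence; padding is
# recommended alongside it (#531). `sample_packing` is an assumed key name;
# `pad_to_worst_case_seq_len` is the boolean added in #498 and may have been
# renamed in later versions.
sample_packing: true
pad_to_worst_case_seq_len: true

# Batch size is decoupled from the number of GPUs by expressing it as a
# per-device batch plus gradient accumulation steps (see c2a0792).
micro_batch_size: 2            # assumed key name for the per-device batch
gradient_accumulation_steps: 4

# Weights & Biases options referenced in #361 and the config-3b.yml update;
# wandb_project defaults to empty (a52f481), which leaves logging disabled.
wandb_project: ""
wandb_entity: ""
wandb_log_model: ""
```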