refactor scripts/finetune.py into new cli modules (#550) 861ceca winglian Nanobit committed on Sep 15, 2023
check for the existence of the default accelerate config that can create headaches (#561) fdb777b winglian committed on Sep 13, 2023
move is_llama_derived_model into normalize_config (#524) 44454ae tmm1 committed on Sep 4, 2023
Debug tokenization output: Add ability to output text only (no tokens), and/or specify num samples to see (#511) 48434be Tom Jobbins committed on Aug 31, 2023
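A rough sketch of what this debugging option does, not the actual axolotl CLI code: print the first few samples either as raw token ids or as decoded text only. The `debug_samples` helper and its flags are hypothetical.

```python
# Hypothetical illustration of the tokenization-debug idea: show the
# first `num_samples` examples, as token ids or decoded text only.
from transformers import AutoTokenizer

def debug_samples(dataset, tokenizer, num_samples=3, text_only=False):
    for example in list(dataset)[:num_samples]:
        ids = example["input_ids"]
        print(tokenizer.decode(ids) if text_only else ids)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
dataset = [{"input_ids": tokenizer("hello world")["input_ids"]}]
debug_samples(dataset, tokenizer, num_samples=1, text_only=True)
```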
tweak: use default config file when only one file is present (#501) 36b2e1c Maxime committed on Aug 29, 2023
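A minimal sketch of the single-config fallback this tweak describes, assuming configs are YAML files in one directory; `resolve_config` is a hypothetical name, not the actual helper.

```python
# If the config directory holds exactly one YAML file, use it as the
# default; otherwise the caller must name one explicitly.
from pathlib import Path

def resolve_config(config_dir: str) -> Path:
    candidates = sorted(Path(config_dir).glob("*.y*ml"))
    if len(candidates) == 1:
        return candidates[0]
    raise ValueError(
        f"expected exactly one config in {config_dir}, found {len(candidates)}"
    )
```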
fix: inference did not move the model to the correct device (#483) 17605b8 Maxime committed on Aug 26, 2023
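The fix amounts to "model and inputs must live on the same device before generation". A self-contained sketch of the pattern with a stand-in module instead of the real model:

```python
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2).to(device)   # the fix: move the model to the target device
x = torch.randn(1, 4).to(device)     # inputs must follow the model's device
with torch.no_grad():
    y = model(x)                     # no device-mismatch error now
```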
ReLoRA implementation (with quantization) (#322) bde3c5a chargoddard winglian committed on Aug 24, 2023
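ReLoRA periodically folds the learned low-rank delta back into the base weight and restarts the adapter, so repeated rank-r updates can accumulate into a higher-rank change. A pure-tensor sketch of that loop (the actual training step and the PR's quantization are elided):

```python
import torch

d, r, merge_every = 8, 2, 100
base = torch.randn(d, d)                    # frozen base weight
A = torch.randn(d, r) * 0.01                # low-rank adapter factors
B = torch.zeros(r, d)

for step in range(1, 301):
    # ... one optimizer step on A and B would happen here ...
    if step % merge_every == 0:
        base = base + A @ B                 # merge the learned delta
        A = torch.randn(d, r) * 0.01        # reset the adapter and keep training
        B = torch.zeros(r, d)
```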
feat(docs): improve user-customized prompts (#443) 04a42b6 Nanobit committed on Aug 21, 2023
use context manager to run things on rank0 before others (#397) fc2d6be winglian committed on Aug 15, 2023
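The rank-0-first pattern lets the main process do one-time work (downloading or preprocessing a dataset, say) while the other ranks wait and then reuse the result. A minimal sketch assuming `torch.distributed` is initialized; `zero_first` is illustrative, not necessarily the implementation in #397:

```python
from contextlib import contextmanager
import torch.distributed as dist

@contextmanager
def zero_first():
    """Run the body on rank 0 first; the other ranks wait, then run it."""
    is_main = not dist.is_initialized() or dist.get_rank() == 0
    if not is_main:
        dist.barrier()       # non-zero ranks wait for rank 0 to finish
    yield
    if is_main and dist.is_initialized():
        dist.barrier()       # rank 0 releases the waiting ranks
```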
Attention mask and position id fixes for packing (#285) 2bb0b78 winglian committed on Aug 12, 2023
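When several sequences are packed into one row, position ids must restart at 0 for each sequence (and the attention mask must keep the sequences from attending to each other). A sketch of the position-id half, assuming a mask that labels each packed sequence with its own integer id and uses 0 for padding; `packed_position_ids` is a hypothetical helper:

```python
import torch

def packed_position_ids(attention_mask: torch.Tensor) -> torch.Tensor:
    position_ids = torch.zeros_like(attention_mask)
    for row in range(attention_mask.size(0)):
        for seq_id in attention_mask[row].unique():
            if seq_id == 0:          # skip padding positions
                continue
            idx = (attention_mask[row] == seq_id).nonzero(as_tuple=True)[0]
            position_ids[row, idx] = torch.arange(len(idx))
    return position_ids

mask = torch.tensor([[1, 1, 1, 2, 2, 0]])  # two sequences packed into one row
print(packed_position_ids(mask))           # tensor([[0, 1, 2, 0, 1, 0]])
```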
Fixed pre-commit problems, fixed small bug in logging_config to handle LOG_LEVEL env var b1f4f7a theobjectivedad committed on Jul 15, 2023
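A minimal sketch of honoring a LOG_LEVEL environment variable during logging setup, using only the standard library; `configure_logging` is illustrative, not the exact code in logging_config:

```python
import logging
import os

def configure_logging():
    # Fall back to INFO when LOG_LEVEL is unset or not a valid level name.
    name = os.environ.get("LOG_LEVEL", "INFO").upper()
    logging.basicConfig(level=getattr(logging, name, logging.INFO))

configure_logging()
logging.getLogger(__name__).info("logging configured")
```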
Merge pull request #92 from OpenAccess-AI-Collective/flash-optimum 16bb627 winglian committed on Jun 14, 2023
Merge pull request #177 from NanoCode012/fix/landmark-patch 8002ffb winglian committed on Jun 12, 2023
add flash attn context for efficient training and attempt setting model to train mode 8792199 winglian committed on May 27, 2023
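One way to get such a flash-attention context with stock PyTorch 2.x is the SDPA kernel selector; this is a sketch under that assumption, not necessarily what the commit used (flash kernels require half-precision tensors on CUDA):

```python
import torch
import torch.nn.functional as F

q = k = v = torch.randn(1, 4, 16, 64)   # (batch, heads, seq_len, head_dim)
if torch.cuda.is_available():
    q, k, v = (t.cuda().half() for t in (q, k, v))
    # Prefer the flash kernel inside this context.
    with torch.backends.cuda.sdp_kernel(
        enable_flash=True, enable_math=False, enable_mem_efficient=False
    ):
        out = F.scaled_dot_product_attention(q, k, v)
else:
    out = F.scaled_dot_product_attention(q, k, v)   # CPU fallback kernel
```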