move is_llama_derived_model into normalize_config (#524) 44454ae unverified tmm1 committed on Sep 4, 2023
Debug tokenization output: add ability to output text only (no tokens) and/or specify the number of samples to see (#511) 48434be unverified Tom Jobbins committed on Aug 31, 2023
Added advanced DDP args (#515) 396a7a7 unverified Jan Philipp Harries committed on Aug 31, 2023
Changed Bench Eval to report metrics correctly by split. Added total accuracy and renamed the previously used bench_accuracy to bench_average_accuracy. (#512) 42f9642 unverified Alpay Ariyak committed on Aug 31, 2023
Added pad_to_worst_case_seq_len boolean for testing memory limits (#498) 8e197f6 unverified Birch-san tmm1 committed on Aug 28, 2023
fsdp requires params to be the same type too (#493) 98bf76e unverified winglian committed on Aug 28, 2023
Fix(tokenizer): make sure to add pad token for CodeLlamaTokenizer (#489) 4c37bd0 unverified Nanobit committed on Aug 28, 2023
fix: finetune model inference needs the dtype fix to work with flash-attn f311df9 unverified Maxime committed on Aug 26, 2023
Fix(tokenizer): fix condition to add pad token (#477) 71bd062 unverified Nanobit committed on Aug 25, 2023
ReLoRA implementation (with quantization) (#322) bde3c5a unverified chargoddard winglian committed on Aug 24, 2023
Workaround so training doesn't hang when packed dataloader batches aren't even (#461) c69faee unverified winglian committed on Aug 23, 2023
feat: add Metharme prompt strategy (#446) f474650 unverified TearGosling Nanobit committed on Aug 22, 2023
Recast loralayer, norm, lmhead + embed token weights per original qlora (#393) 96deb6b unverified winglian committed on Aug 21, 2023
Fix eval regression caused in 13f7efaf74fcd3c4514277ccb71914c589873f6a a213d99 tmm1 committed on Aug 21, 2023
Support user-defined prompters, pretokenized datasets in config, local parquet, local arrow files (#348) d2e7f27 unverified winglian committed on Aug 20, 2023
Use save_strategy from config if available (#434) b3f5e00 unverified winglian committed on Aug 19, 2023