improve how we set up eval/save strategies and steps (#547) 36e53c7 winglian committed on Sep 13, 2023
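A minimal sketch of keeping the two cadences aligned, using plain Hugging Face `TrainingArguments` (standard transformers option names, not axolotl's YAML keys):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./out",
    evaluation_strategy="steps",  # evaluate every eval_steps
    eval_steps=50,
    save_strategy="steps",        # checkpoint on the same cadence
    save_steps=50,                # must be a round multiple of eval_steps
    load_best_model_at_end=True,  # requires eval and save strategies to match
)
```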
gracefully handle length feature used for group by (#565) e7aa7b1 winglian committed on Sep 13, 2023
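For context, transformers' length-grouped sampler reads a precomputed length column (`TrainingArguments.length_column_name`, default `"length"`) when the dataset provides one. A sketch, assuming a tokenized dataset is already in scope:

```python
from transformers import TrainingArguments

def add_length(example):
    # Precompute the length so the sampler doesn't measure every row itself.
    example["length"] = len(example["input_ids"])
    return example

train_dataset = train_dataset.map(add_length)  # tokenized datasets.Dataset assumed

args = TrainingArguments(
    output_dir="./out",
    group_by_length=True,         # bucket similar-length rows into batches
    length_column_name="length",  # transformers' default column name
)
```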
Add training callback to send predictions to WandB table (#521) 5b67ea9 Glavin001 committed on Sep 13, 2023
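A minimal sketch of such a callback; `get_predictions()` is a hypothetical helper standing in for however the prompts and completions are collected:

```python
import wandb
from transformers import TrainerCallback

class LogPredictionsCallback(TrainerCallback):
    def on_evaluate(self, args, state, control, **kwargs):
        prompts, predictions = get_predictions()  # hypothetical helper
        table = wandb.Table(columns=["prompt", "prediction"])
        for prompt, prediction in zip(prompts, predictions):
            table.add_data(prompt, prediction)
        # Tables render as interactive, filterable grids in the W&B UI.
        wandb.log({"eval/predictions": table}, step=state.global_step)
```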
Fix pretraining with iterable/streaming Dataset (#556) 2f586d1 Jan Philipp Harries committed on Sep 13, 2023
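The wrinkle this fixes: a streaming dataset is an `IterableDataset` with no length, so anything that assumes a row count breaks. A sketch of consuming one, using c4 purely as an example:

```python
from datasets import load_dataset

dataset = load_dataset("c4", "en", split="train", streaming=True)
dataset = dataset.shuffle(seed=42, buffer_size=10_000)  # buffered shuffle, no full pass
for sample in dataset.take(3):  # peek at a few rows without downloading the corpus
    print(sample["text"][:80])
# len(dataset) would raise: step budgets (max_steps) work, epoch counts do not.
```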
recommend padding when using sample packing (#531) 3437149 winglian committed on Sep 6, 2023
Add support for GPTQ using native transformers/peft (#468) 3355706 winglian committed on Sep 5, 2023
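A sketch of the native path, assuming a pre-quantized checkpoint (the repo id is only an example) and `auto-gptq`/`optimum` installed; transformers reads the quantization config stored alongside the weights:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-GPTQ",  # example pre-quantized repo, an assumption
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # freeze base weights, cast norms
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))
```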
Merge pull request #520 from bdashore3/sharegpt-fixes daa4fac mhenrichsen committed on Sep 5, 2023
move is_llama_derived_model into normalize_config (#524) 44454ae tmm1 committed on Sep 4, 2023
Debug tokenization output: Add ability to output text only (no tokens), and/or specify num samples to see (#511) 48434be Tom Jobbins committed on Aug 31, 2023
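A sketch of the idea, with hypothetical `num_samples`/`text_only` parameters mirroring the new flags in spirit (the real flag names live in axolotl's CLI):

```python
def debug_tokenization(dataset, tokenizer, num_samples=5, text_only=False):
    # Dump a few samples either as plain decoded text, or token id by token id
    # so special tokens and merge boundaries stay visible.
    for sample in dataset.select(range(num_samples)):
        ids = sample["input_ids"]
        if text_only:
            print(tokenizer.decode(ids))
        else:
            print([(i, tokenizer.decode([i])) for i in ids])
```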
Added advanced DDP args (#515) 396a7a7 Jan Philipp Harries committed on Aug 31, 2023
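For flavor, the equivalent knobs on plain `TrainingArguments` (standard transformers option names; axolotl exposes its own config keys for them):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./out",
    ddp_timeout=7200,                  # seconds before a stuck rank aborts the job
    ddp_bucket_cap_mb=25,              # gradient bucket size for all-reduce
    ddp_find_unused_parameters=False,  # skip the costly unused-parameter scan
)
```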
Changed Bench Eval to report metrics correctly by split. Added total accuracy and renamed previously used bench_accuracy to bench_average_accuracy. (#512) 42f9642 Alpay Ariyak committed on Aug 31, 2023
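A sketch of the distinction between the two aggregates, assuming results keyed by split with (correct, total) counts; the metric key names here are illustrative:

```python
def summarize_bench(results):
    # results: {split_name: (num_correct, num_total)} -- assumed shape
    metrics = {f"bench_accuracy_{s}": c / t for s, (c, t) in results.items()}
    correct = sum(c for c, _ in results.values())
    total = sum(t for _, t in results.values())
    metrics["bench_total_accuracy"] = correct / total           # pooled over all rows
    metrics["bench_average_accuracy"] = (
        sum(c / t for c, t in results.values()) / len(results)  # mean over splits
    )
    return metrics
```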
pad_to_worst_case_seq_len boolean, for testing memory limits (#498) 8e197f6 Birch-san, tmm1 committed on Aug 28, 2023
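What the flag effectively does, sketched with plain tokenizer calls (example model id and length): every sample is padded to the full window, so worst-case memory shows up in the first step rather than hours in:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")  # example id
tokenizer.pad_token = tokenizer.eos_token  # llama tokenizers ship without a pad token

encoded = tokenizer(
    "a short sample",
    padding="max_length",  # always pad out to the full window
    max_length=2048,       # the configured sequence_len, example value
    truncation=True,
)
```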
fsdp requires params to be the same type too (#493) 98bf76e winglian committed on Aug 28, 2023
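The usual symptom is fp32 norm layers alongside bf16 weights: FSDP flattens parameters into shared buffers, so they must agree. A sketch of the cast, with `model` assumed already loaded:

```python
import torch

for param in model.parameters():
    if param.dtype != torch.bfloat16:
        param.data = param.data.to(torch.bfloat16)  # unify dtypes before FSDP wrap
```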
Fix(tokenizer): Make sure to add pad token for CodeLlamaTokenizer (#489) 4c37bd0 Nanobit committed on Aug 28, 2023
fix: finetune model inference needs the dtype fix to work with flash-attn f311df9 Maxime committed on Aug 26, 2023
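flash-attn kernels only accept fp16/bf16 tensors, so the model has to be loaded (or cast) in half precision for inference as well; a sketch with an example model id:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",      # example id, an assumption
    torch_dtype=torch.float16,  # fp32 activations would crash the flash-attn kernels
    device_map="auto",
)
```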
Fix(tokenizer): Fix condition to add pad token (#477) 71bd062 Nanobit committed on Aug 25, 2023
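Both pad-token fixes boil down to the same guard, sketched here with `tokenizer` and `model` assumed in scope:

```python
if tokenizer.pad_token is None:  # only add one when it's truly missing
    tokenizer.add_special_tokens({"pad_token": "<pad>"})
    model.resize_token_embeddings(len(tokenizer))  # give the new id an embedding row
```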
ReLoRA implementation (with quantization) (#322) bde3c5a chargoddard, winglian committed on Aug 24, 2023
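A sketch of the ReLoRA loop: train an adapter, merge it into the (possibly quantized) base weights, then restart with a fresh adapter, so repeated low-rank updates sum to a higher-rank one. `train_steps`, `num_restarts`, and `steps_per_restart` are hypothetical stand-ins:

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
for cycle in range(num_restarts):               # hypothetical restart budget
    model = get_peft_model(model, lora_config)  # fresh, zero-initialized adapter
    train_steps(model, steps_per_restart)       # hypothetical training call
    model = model.merge_and_unload()            # fold the adapter into the base weights
```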
workaround so training doesn't hang when packed dataloader batches aren't even (#461) c69faee winglian committed on Aug 23, 2023
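The hang comes from DDP collectives: if one rank receives a final, shorter batch the others don't, every other rank blocks forever. The simplest form of the workaround, with `packed_dataset` assumed in scope:

```python
from torch.utils.data import DataLoader

loader = DataLoader(packed_dataset, batch_size=8, drop_last=True)  # drop the uneven tail
```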