strip out hacky qlora-fsdp workarounds now that qlora-fsdp fixes are upstreamed (#1428) 2a1589f winglian committed on Mar 21
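With the fixes upstreamed, QLoRA and FSDP can be combined directly in the config. A minimal sketch, assuming the `fsdp_config` keys used in axolotl's qlora-fsdp examples (exact keys may vary by version):

```yaml
# hedged sketch: QLoRA + FSDP without the old workarounds
adapter: qlora
load_in_4bit: true
fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_cpu_ram_efficient_loading: true
  fsdp_use_orig_params: false
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer  # assumes a Llama-family model
  fsdp_state_dict_type: FULL_STATE_DICT
```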
HF / FEAT: Optimize HF tags (#1425) [skip ci] 7d55607 Younes Belkada winglian committed on Mar 21
support GaLore once upstreamed into transformers (#1409) dd449c5 winglian committed on Mar 19
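GaLore is enabled through the upstream transformers integration. A hedged sketch, assuming axolotl forwards `optimizer`, `optim_args`, and `optim_target_modules` to transformers' TrainingArguments; the values shown are illustrative:

```yaml
# hedged sketch: GaLore low-rank gradient projection via transformers
optimizer: galore_adamw
optim_target_modules:
  - self_attn
  - mlp
optim_args: "rank=128, update_proj_gap=200, scale=0.25"
```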
Add a config option to not shuffle the merged dataset (#1394) [skip ci] 43bdc5d seungduk winglian committed on Mar 19
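A minimal sketch, assuming the key added in #1394 is `shuffle_merged_datasets` (default `true`):

```yaml
# keep merged datasets in their original order instead of shuffling
shuffle_merged_datasets: false
```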
fix(config): pass gradient_checkpointing_kwargs through (#1412) b1e3e1b Nanobit committed on Mar 19
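With the fix, the kwargs actually reach transformers' gradient checkpointing. A minimal sketch using the `gradient_checkpointing_kwargs` config key:

```yaml
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false  # use torch's non-reentrant checkpointing path
```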
Train parameters exclusively in specific ranges (#1390) 05bcc9e seungduk committed on Mar 14
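A hedged sketch of the range syntax: the assumption here is that #1390 extends `unfrozen_parameters` with index slices so that, for example, only newly added token-embedding rows are trained; the exact slice syntax may differ by version:

```yaml
# hypothetical example: train only embedding rows 32000-32004 (new tokens)
unfrozen_parameters:
  - model.embed_tokens.weight[32000:32005]
  - lm_head.weight[32000:32005]
```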
Don't disable existing loggers when configuring axolotl logging (#1395) 3bd8203 chiragjn committed on Mar 14
Update ChatTemplate enum to include alpaca and gemma (#1396) 0976781 chiragjn committed on Mar 13
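The enum backs the top-level `chat_template` option, e.g.:

```yaml
chat_template: gemma  # newly supported alongside alpaca; chatml and inst predate this change
```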
Add Glaive conversation format support (#1365) b7d8a7d Brian Fitzgerald winglian committed on Mar 11
Fix pydantic configuration for the max_memory input (#1385) [skip ci] 0bc114d dandm1 winglian committed on Mar 11
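`max_memory` maps device identifiers to per-device memory caps, in the shape transformers' `from_pretrained` accepts. A minimal sketch:

```yaml
max_memory:
  0: 20GiB    # cap for GPU 0
  cpu: 60GiB  # CPU offload budget
```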
allow the sharegpt handler to better handle datasets destined for OpenAI finetuning (#1361) 2598c9f winglian committed on Mar 5
plain input/output prompt strategy without chat templates (#1346) 4d09b42 winglian committed on Mar 4
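A minimal sketch of the input_output strategy, assuming the segment-based JSONL layout described in axolotl's docs, where `label` marks whether a segment contributes to the loss:

```yaml
# each JSONL row looks roughly like:
# {"segments": [{"label": false, "text": "<s>question: ..."},
#               {"label": true,  "text": "answer: ...</s>"}]}
datasets:
  - path: ./data/raw_io.jsonl  # hypothetical path
    type: input_output
train_on_inputs: false
```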
Update fastchat_conversation_turns.py (#1294) [skip ci] 2b9687f eltociear committed on Feb 27
Support user-defined prompt processing strategies for DPO (#1248) 1e3d530 nopperl winglian committed on Feb 26
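A hedged sketch of a user-defined DPO type, assuming the field and format mapping introduced in #1248; the dataset path and field names are hypothetical:

```yaml
rl: dpo
datasets:
  - path: my/dpo-data
    split: train
    type:
      field_prompt: question
      field_chosen: chosen
      field_rejected: rejected
      prompt_format: "[INST] {prompt} [/INST]"
      chosen_format: "{chosen}"
      rejected_format: "{rejected}"
```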
hotfix to exclude_unset from pydantic config when converting back to a dict (#1334) 269c543 winglian committed on Feb 26
ADD: push checkpoints to mlflow artifact registry (#1295) [skip ci] d756534 JohanWork Nanobit winglian committed on Feb 26
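A minimal sketch, assuming axolotl's mlflow keys plus the artifact flag this PR adds (the flag name is assumed to mirror the HF MLflow callback's environment variable):

```yaml
mlflow_tracking_uri: http://localhost:5000  # hypothetical tracking server
mlflow_experiment_name: my-experiment
hf_mlflow_log_artifacts: true  # push checkpoints to the mlflow artifact registry
```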
Allow load_best_model_at_end to be configured for early stopping on custom evaluation datasets (#1291) 3c00f40 David Meikle committed on Feb 21
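A minimal sketch combining the option with early stopping on a held-out set; the dataset path is hypothetical:

```yaml
test_datasets:
  - path: my/eval-data
    split: test
load_best_model_at_end: true
early_stopping_patience: 3  # stop after 3 evaluations without improvement
```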
Scheduler implementation from "Continual Pre-Training of Large Language Models: How to (re)warm your model?" (#1273) 8430db2 jinwonkim93 committed on Feb 13
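A hedged sketch of rewarming with a decay floor, assuming #1273 exposes a `cosine_min_lr_ratio` knob (key name assumed):

```yaml
learning_rate: 2e-5
lr_scheduler: cosine
warmup_steps: 100          # (re)warm the LR from zero, per the paper
cosine_min_lr_ratio: 0.1   # decay to 10% of the peak LR instead of zero
```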
allow the optimizer prune ratio for ReLoRA to be configurable (#1287) 4b997c3 winglian committed on Feb 12
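A hedged sketch, assuming the new knob is `relora_prune_ratio` alongside the existing ReLoRA settings:

```yaml
relora_steps: 150         # merge-and-reset interval
relora_warmup_steps: 10
relora_prune_ratio: 0.9   # assumed key: fraction of optimizer state magnitude-pruned at each reset
```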