Don't disable existing loggers when configuring axolotl logging (#1395) 3bd8203 chiragjn committed on Mar 14
Update ChatTemplate enum to include alpaca and gemma (#1396) 0976781 chiragjn committed on Mar 13
Add Glaive conversation format support (#1365) b7d8a7d Brian Fitzgerald winglian committed on Mar 11
Fix pydantic configuration for the max_memory input (#1385) [skip ci] 0bc114d dandm1 winglian committed on Mar 11
allow the sharegpt handler to also better handle datasets destined for openai finetuning (#1361) 2598c9f winglian committed on Mar 5
plain input/output prompt strategy w/o chat templates (#1346) 4d09b42 winglian committed on Mar 4
Update fastchat_conversation_turns.py (#1294) [skip ci] 2b9687f eltociear committed on Feb 27
Support user-defined prompt processing strategies for dpo (#1248) 1e3d530 nopperl winglian committed on Feb 26
hotfix to exclude_unset from pydantic config when converting back to a dict (#1334) 269c543 winglian committed on Feb 26
ADD: push checkpoints to mlflow artifact registry (#1295) [skip ci] d756534 JohanWork Nanobit winglian committed on Feb 26
Allow load_best_model_at_end to be configured for early stopping on custom evaluation datasets (#1291) 3c00f40 David Meikle committed on Feb 21
Scheduler implementation of Continual Pre-Training of Large Language Models: How to (re)warm your model? (#1273) 8430db2 jinwonkim93 committed on Feb 13
allow the optimizer prune ratio for ReLoRA to be configurable (#1287) 4b997c3 winglian committed on Feb 12
simplify handling for newer multipack patches so they can be added in a single place (#1270) 5698943 winglian committed on Feb 7
Fix bug preventing model_kwargs being injected (#1262) 73f1bda Zac Brannelly committed on Feb 7
fix(model): apply gate fp32 only for mixtral (#1241) 2d65f47 Nanobit winglian committed on Feb 1