fix(config): passing gradient_checkpoint_kwargs (#1412) b1e3e1b Nanobit committed on Mar 19, 2024
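The gradient checkpointing kwargs fix above concerns values passed through from the YAML config; a minimal sketch, assuming axolotl's `gradient_checkpointing` / `gradient_checkpointing_kwargs` keys forward to Hugging Face Transformers:

```yaml
# hedged sketch: enable gradient checkpointing and forward kwargs
# to the underlying transformers model (key names are an assumption)
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false  # non-reentrant variant; verify against your transformers version
```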
Train parameters exclusively in specific ranges (#1390) 05bcc9e seungduk committed on Mar 14, 2024
Update ChatTemplate enum to include alpaca and gemma (#1396) 0976781 chiragjn committed on Mar 13, 2024
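With the ChatTemplate enum extended in #1396, the new values should be selectable from config; a hedged sketch, assuming axolotl's `chat_template` key:

```yaml
# either of the values added by #1396 (per the commit title)
chat_template: gemma   # alpaca is the other newly added enum value
```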
Add Glaive conversation format support (#1365) b7d8a7d Brian Fitzgerald winglian committed on Mar 11, 2024
Fix pydantic configuration for the max_memory input (#1385) [skip ci] 0bc114d dandm1 winglian committed on Mar 11, 2024
validation for fsdp and deepspeed (#1388) [skip ci] 3fd8093 winglian committed on Mar 11, 2024
fix for protected model_ namespace w pydantic (#1345) 6b3b271 winglian committed on Feb 28, 2024
Fix `use_mlflow` to be bool instead of str (#1344) 3a5a2d2 chiragjn committed on Feb 28, 2024
Support user-defined prompt processing strategies for dpo (#1248) 1e3d530 nopperl winglian committed on Feb 26, 2024
add lion-pytorch optimizer (#1299) [skip ci] 1648279 Maxime winglian committed on Feb 26, 2024
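The lion-pytorch optimizer added in #1299 would presumably be selected via the optimizer field; a sketch under that assumption (the `lion_pytorch` value is inferred from the PR title, not confirmed):

```yaml
# assumed enum value for the optimizer added by #1299
optimizer: lion_pytorch
learning_rate: 1e-4   # Lion is usually run with a smaller lr than AdamW
weight_decay: 0.1
```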
hotfix to exclude_unset from pydantic config when converting back to a dict (#1334) 269c543 winglian committed on Feb 26, 2024
ADD: push checkpoints to mlflow artifact registry (#1295) [skip ci] d756534 JohanWork Nanobit winglian committed on Feb 26, 2024
Validation always happens on first step (#1300) e2786cc LeonardoEmili committed on Feb 21, 2024
Add seq2seq eval benchmark callback (#1274) 5a5d474 LeonardoEmili committed on Feb 13, 2024
Scheduler implementation of Continual Pre-Training of Large Language Models: How to (re)warm your model? (#1273) 8430db2 jinwonkim93 committed on Feb 13, 2024
simplify handling for newer multipack patches so they can be added in a single place (#1270) 5698943 winglian committed on Feb 7, 2024
Fix bug preventing model_kwargs being injected (#1262) 73f1bda Zac Brannelly committed on Feb 7, 2024
relora: magnitude pruning of the optimizer (#1245) 8c2e05a winglian committed on Feb 6, 2024
fix(model): apply gate fp32 only for mixtral (#1241) 2d65f47 Nanobit winglian committed on Feb 1, 2024
Support for additional_special_tokens (#1221) [skip ci] 25e037f DreamGenX winglian committed on Jan 31, 2024
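The additional_special_tokens support from #1221 would be configured alongside the other tokenizer settings; a hedged sketch, assuming it nests under axolotl's `special_tokens` block (placement and example token strings are assumptions):

```yaml
# assumed placement for the option added by #1221
special_tokens:
  additional_special_tokens: ["<|im_start|>", "<|im_end|>"]
```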
Fix and document test_datasets (#1228) 5787e1a DreamGenX winglian committed on Jan 31, 2024
Revert "run PR e2e docker CI tests in Modal" (#1220) [skip ci] 8da1633 winglian committed on Jan 26, 2024
run PR e2e docker CI tests in Modal (#1217) [skip ci] 36d053f winglian committed on Jan 26, 2024
ADD: warning if hub_model_id is set but no save strategy (#1202) af29d81 JohanWork winglian committed on Jan 26, 2024
more checks and fixes for deepspeed and fsdp (#1208) [skip ci] e923e62 winglian committed on Jan 26, 2024
make sure to register the base chatml template even if no system message is provided (#1207) badda37 winglian committed on Jan 25, 2024