Commit History
bcdc9b1  Fix falcon tokenization step (#1441) [skip ci]
601b77b  make sure to capture non-null defaults from config validation (#1415)  (winglian)
ff939d8  fix(dataset): normalize tokenizer config and change hash from tokenizer class to tokenizer path (#1298)  (Nanobit)
34ba634  Fix ORPO multi gpu (#1433)  (winglian)
2a1589f  strip out hacky qlora-fsdp workarounds now that qlora-fsdp fixes are upstreamed (#1428)  (winglian)
7d55607  HF / FEAT: Optimize HF tags (#1425) [skip ci]
7803f09  fixes for dpo and orpo template loading (#1424)  (winglian)
dd449c5  support galore once upstreamed into transformers (#1409)  (winglian)
40a88e8  Feat: Add sharegpt multirole (#1137)  (Nanobit)
b1e3e1b  fix(config): passing gradient_checkpoint_kwargs (#1412)  (Nanobit)
2ea70eb  ORPO (#1419)  (winglian)
d485a08  chore(script): remove redundant setting (#1411)  (Nanobit)
8df7b88  beta support for multipack with gemmoe (#1402)  (winglian)
05bcc9e  Train parameters exclusively in specific ranges (#1390)  (seungduk)
3bd8203  Don't disable existing loggers when configuring axolotl logging (#1395)  (chiragjn)
0976781  Update ChatTemplate enum to include alpaca and gemma (#1396)  (chiragjn)
8a82d2e  add handling for argilla dpo-mix (#1397)  (winglian)
4326520  chore: lint (#1389)  (winglian)
b7d8a7d  Add Glaive conversation format support (#1365)
7659c00  support for rslora (#1387) [skip ci]  (winglian)
3fd8093  validation for fsdp and deepspeed (#1388) [skip ci]  (winglian)
9b6ee83  FDSP + QLoRA (#1378)  (winglian)
0cfdb2c  support for DoRA w/ PEFT (#1363)  (winglian)
2598c9f  allow the sharegpt handler to also better handle datasets destined for openai finetuning (#1361)  (winglian)
decb66e  lora+ support (#1352)  (winglian)
4d09b42  plain input/output prompt strategy w/o chat templates (#1346)  (winglian)
b5b4492  Fix validation for early stopping (#1358)  (chiragjn)
6b3b271  fix for protected model_ namespace w pydantic (#1345)  (winglian)
3a5a2d2  Fix `use_mlflow` to be bool instead of str (#1344)  (chiragjn)
0f985e1  more fixes 20240228 (#1342) [skip ci]  (winglian)
c1a7b3d  add gemma instruct chat template (#1341)  (winglian)
2b9687f  Update fastchat_conversation_turns.py (#1294) [skip ci]  (eltociear)
2c9c88b  fix steps check for anneal on first cycle (#1316)  (winglian)
3f69571  more pydantic fixes (#1338)  (winglian)
1e3d530  Support user-defined prompt processing strategies for dpo (#1248)
1648279  add lion-pytorch optimizer (#1299) [skip ci]
269c543  hotfix to exclude_unset from pydantic config when converting back to a dict (#1334)  (winglian)
e7eed20  hotfix for missing outputs params (#1333)  (winglian)
cf00231  hotfix for lora rank (#1332)  (winglian)
7de912e  hotfix for capabilities loading (#1331)  (winglian)
cc3cebf  Pydantic 2.x cfg (#1239)  (winglian)
5894f0e  make mlflow optional (#1317)  (winglian)
2752d5f  multipack for gemma (#1313)  (winglian)
3c00f40  Allow load_best_model_at_end to be configured for early stopping on custom evaluation datasets (#1291)  (David Meikle)
e2786cc  Validation always happens on first step (#1300)  (LeonardoEmili)