Commit History

Don't disable existing loggers when configuring axolotl logging (#1395)
3bd8203
chiragjn committed

Update ChatTemplate enum to include alpaca and gemma (#1396)
0976781
chiragjn committed

add handling for argilla dpo-mix (#1397)
8a82d2e
winglian committed

chore: lint (#1389)
4326520
winglian committed

Add Glaive conversation format support (#1365)
b7d8a7d
Brian Fitzgerald and winglian committed

Fix pydantic configuration for the max_memory input (#1385) [skip ci]
0bc114d
dandm1 and winglian committed

support for rslora (#1387) [skip ci]
7659c00
winglian committed

validation for fsdp and deepspeed (#1388) [skip ci]
3fd8093
winglian committed

FSDP + QLoRA (#1378)
9b6ee83
winglian committed

support for DoRA w/ PEFT (#1363)
0cfdb2c
winglian committed

allow the sharegpt handler to better handle datasets destined for openai finetuning (#1361)
2598c9f
winglian committed

lora+ support (#1352)
decb66e
winglian committed

plain input/output prompt strategy w/o chat templates (#1346)
4d09b42
winglian committed

Fix validation for early stopping (#1358)
b5b4492
chiragjn committed

fix for protected model_ namespace w/ pydantic (#1345)
6b3b271
winglian committed

Fix `use_mlflow` to be bool instead of str (#1344)
3a5a2d2
chiragjn committed

more fixes 20240228 (#1342) [skip ci]
0f985e1
winglian committed

add gemma instruct chat template (#1341)
c1a7b3d
winglian committed

Update fastchat_conversation_turns.py (#1294) [skip ci]
2b9687f
eltociear committed

fix steps check for anneal on first cycle (#1316)
2c9c88b
winglian committed

more pydantic fixes (#1338)
3f69571
winglian committed

Support user-defined prompt processing strategies for dpo (#1248)
1e3d530
nopperl and winglian committed

add lion-pytorch optimizer (#1299) [skip ci]
1648279
Maxime and winglian committed

hotfix to exclude_unset from pydantic config when converting back to a dict (#1334)
269c543
winglian committed

hotfix for missing outputs params (#1333)
e7eed20
winglian committed

hotfix for lora rank (#1332)
cf00231
winglian committed

hotfix for capabilities loading (#1331)
7de912e
winglian committed

ADD: push checkpoints to mlflow artifact registry (#1295) [skip ci]
d756534
JohanWork, Nanobit, and winglian committed

Pydantic 2.x cfg (#1239)
cc3cebf
winglian committed

make mlflow optional (#1317)
5894f0e
winglian committed

multipack for gemma (#1313)
2752d5f
winglian committed

Allow load_best_model_at_end to be configured for early stopping on custom evaluation datasets (#1291)
3c00f40
David Meikle committed

Validation always happens on first step (#1300)
e2786cc
LeonardoEmili committed

Add seq2seq eval benchmark callback (#1274)
5a5d474
LeonardoEmili committed

Scheduler implementation of Continual Pre-Training of Large Language Models: How to (re)warm your model? (#1273)
8430db2
jinwonkim93 committed

allow the optimizer prune ratio for ReLoRA to be configurable (#1287)
4b997c3
winglian committed

Add MPS support (#1264)
fac2d98
Maxime and winglian committed

don't use load and push together (#1284)
ea00dd0
winglian committed

add support for https remote yamls (#1277)
9bca7db
hamel committed

allow remote data paths (#1278)
91cf4ee
hamel committed

simplify handling for newer multipack patches so they can be added in a single place (#1270)
5698943
winglian committed

Fix bug preventing model_kwargs being injected (#1262)
73f1bda
Zac Brannelly committed

Add more save strategies for DPO training. (#1255)
13eea21
Philip May committed

Fix typo `bloat16` -> `bfloat16` (#1257)
1072f28
chiragjn committed

Pretrain transforms (#1261)
c7cf381
winglian committed

relora: magnitude pruning of the optimizer (#1245)
8c2e05a
winglian committed

fix(model): apply gate fp32 only for mixtral (#1241)
2d65f47
Nanobit and winglian committed

support for true batches with multipack (#1230)
00568c1
winglian committed

Peft deepspeed resume (#1227)
c67fb71
winglian committed