Commit History

ORPO (#1419)
2ea70eb · committed by winglian

Update README.md (#1418)
e8c8ea6 · committed by jbl

chore(script): remove redundant setting (#1411)
d485a08 · committed by Nanobit

Fix(readme): Improve README QuickStart info (#1408)
f083aed · committed by Nanobit

Feat(readme): Add instructions for Google GPU VM instances (#1410)
868c339 · committed by Nanobit

beta support for multipack with gemmoe (#1402)
8df7b88 · committed by winglian

Fix Gemma 7b qlora.yml (#1405)
6366b0c · committed by rasbt

Train parameters exclusively in specific ranges (#1390)
05bcc9e · committed by seungduk

Don't disable existing loggers when configuring axolotl logging (#1395)
3bd8203 · committed by chiragjn

Add QLoRA + FSDP Docs (#1403)
8b12468 · committed by hamel

Update ChatTemplate enum to include alpaca and gemma (#1396)
0976781 · committed by chiragjn

add handling for argilla dpo-mix (#1397)
8a82d2e · committed by winglian

chore: lint (#1389)
4326520 · committed by winglian

Add Glaive conversation format support (#1365)
b7d8a7d · committed by Brian Fitzgerald, winglian

Set `gradient_clipping` to `auto` in DeepSpeed configs (#1382) [skip ci]
b0ee9ec · committed by seungduk

Fix pydantic configuration for the max_memory input (#1385) [skip ci]
0bc114d · committed by dandm1, winglian

support for rslora (#1387) [skip ci]
7659c00 · committed by winglian

validation for fsdp and deepspeed (#1388) [skip ci]
3fd8093 · committed by winglian

FSDP + QLoRA (#1378)
9b6ee83 · committed by winglian

JarvisLabs (#1372)
638c2da · committed by winglian

update flash attention for gemma support (#1368)
58b0d4b · committed by winglian

add docs for `input_output` format (#1367) [skip ci]
ed70a08 · committed by hamel

support for DoRA w/ PEFT (#1363)
0cfdb2c · committed by winglian

Remove unsupported python version 3.9 from README (#1364) [skip ci]
3765747 · committed by Nicolas Rojas

Update tinyllama lora.yml to fix eval packing issue (#1362)
8984bf1 · committed by rasbt

allow the sharegpt handler to also better handle datasets destined for openai finetuning (#1361)
2598c9f · committed by winglian

lora+ support (#1352)
decb66e · committed by winglian

plain input/output prompt strategy w/o chat templates (#1346)
4d09b42 · committed by winglian

Fix validation for early stopping (#1358)
b5b4492 · committed by chiragjn

chore: enable sample_packing for Gemma (#1351)
170d4d7 · committed by Nanobit

run tests again on Modal (#1289) [skip ci]
0001862 · committed by winglian

fix for protected model_ namespace w pydantic (#1345)
6b3b271 · committed by winglian

Fix `use_mlflow` to be bool instead of str (#1344)
3a5a2d2 · committed by chiragjn

deprecate py 3.9 support, set min pytorch version (#1343) [skip ci]
6d4bbb8 · committed by winglian

more fixes 20240228 (#1342) [skip ci]
0f985e1 · committed by winglian

add gemma instruct chat template (#1341)
c1a7b3d · committed by winglian

Update fastchat_conversation_turns.py (#1294) [skip ci]
2b9687f · committed by eltociear

fix steps check for anneal on first cycle (#1316)
2c9c88b · committed by winglian

Update debugging.md (#1339) [skip ci]
5265cd6 · committed by hamel

fix: checkpoint saving with deepspeed (#1321)
5be8b55 · committed by Nanobit

Mps mistral lora (#1292) [skip ci]
0f6af36 · committed by Maxime, Nanobit, winglian

more pydantic fixes (#1338)
3f69571 · committed by winglian

Support user-defined prompt processing strategies for dpo (#1248)
1e3d530 · committed by nopperl, winglian

add lion-pytorch optimizer (#1299) [skip ci]
1648279 · committed by Maxime, winglian

Add StableLM 2 Example Scripts (#1327) [skip ci]
f30d062 · committed by ncoop57

hotfix to exclude_unset from pydantic config when converting back to a dict (#1334)
269c543 · committed by winglian

hotfix for missing outputs params (#1333)
e7eed20 · committed by winglian

hotfix for lora rank (#1332)
cf00231 · committed by winglian

hotfix for capabilities loading (#1331)
7de912e · committed by winglian