Commit History

fix pretraining_ on odd datasets (#1463)
586bd8d
unverified

monsoon-nlp committed on

Reorganize Docs (#1468)
86b7d22
unverified

hamel committed on

reduce verbosity of the special tokens (#1472)
0b10377
unverified

winglian committed on

feat: add deepspeed 3 with cpuoffload (#1466)
946b497
unverified

Nanobit winglian committed on

make sure to install causal_conv1d in docker (#1459)
89134f2
unverified

winglian committed on

qwen2_moe support w multipack (#1455)
6086be8
unverified

winglian committed on

Nightlies fix v4 (#1458) [skip ci]
4a92a3b
unverified

winglian committed on

fix yaml parsing for workflow (#1457) [skip ci]
46a73e3
unverified

winglian committed on

fix how nightly tag is generated (#1456) [skip ci]
da3415b
unverified

winglian committed on

configure nightly docker builds (#1454) [skip ci]
8cb127a
unverified

winglian committed on

fix some of the edge cases for Jamba (#1452)
05b398a
unverified

winglian committed on

Support loading datasets saved via save_to_disk (#1432)
e634118
unverified

fozziethebeat committed on

Jamba (#1451)
02af082
unverified

winglian committed on

fix layer_replication arg to peft (#1446)
4155e99
unverified

winglian committed on

support layer replication for peft and fix rslora integration (#1445)
25afd35
unverified

winglian committed on

fix for accelerate env var for auto bf16, add new base image and expand torch_cuda_arch_list support (#1413)
da265dd
unverified

winglian committed on

Remove seq_len arg in rotary_emb (#1443)
e07347b
unverified

wenbopan winglian committed on

Fix falcon tokenization step (#1441) [skip ci]
bcdc9b1
unverified

Far El winglian committed on

turn sample_packing on for training (#1438) [skip ci]
c19d060
unverified

satpalsr committed on

make sure to capture non-null defaults from config validation (#1415)
601b77b
unverified

winglian committed on

fix(dataset): normalize tokenizer config and change hash from tokenizer class to tokenizer path (#1298)
ff939d8
unverified

Nanobit committed on

docs: update link to docs of advance topic in README.md (#1437)
324d59e
unverified

pphuc25 committed on

chore(config): refactor old mistral config (#1435)
f1ebaa0
unverified

Nanobit committed on

Fix ORPO multi gpu (#1433)
34ba634
unverified

winglian committed on

Update docs.yml
4e69aa4
unverified

hamel committed on

Bootstrap Hosted Axolotl Docs w/Quarto (#1429)
629450c
unverified

hamel committed on

strip out hacky qlora-fsdp workarounds now that qlora-fsdp fixes are upstreamed (#1428)
2a1589f
unverified

winglian committed on

HF / FEAT: Optimize HF tags (#1425) [skip ci]
7d55607
unverified

Younes Belkada winglian committed on

fixes for dpo and orpo template loading (#1424)
7803f09
unverified

winglian committed on

support galore once upstreamed into transformers (#1409)
dd449c5
unverified

winglian committed on

Feat: Add sharegpt multirole (#1137)
40a88e8
unverified

Nanobit committed on

Add a config not to shuffle merged dataset (#1394) [skip ci]
43bdc5d
unverified

seungduk winglian committed on

fix(config): passing gradient_checkpoint_kwargs (#1412)
b1e3e1b
unverified

Nanobit committed on

ORPO (#1419)
2ea70eb
unverified

winglian committed on

Update README.md (#1418)
e8c8ea6
unverified

jbl committed on

chore(script): remove redundant setting (#1411)
d485a08
unverified

Nanobit committed on

Fix(readme): Improve README QuickStart info (#1408)
f083aed
unverified

Nanobit committed on

Feat(readme): Add instructions for Google GPU VM instances (#1410)
868c339
unverified

Nanobit committed on

beta support for multipack with gemmoe: (#1402)
8df7b88
unverified

winglian committed on

Fix Gemma 7b qlora.yml (#1405)
6366b0c
unverified

rasbt committed on

Train parameters exclusively in specific ranges (#1390)
05bcc9e
unverified

seungduk committed on

Don't disable existing loggers when configuring axolotl logging (#1395)
3bd8203
unverified

chiragjn committed on

Add QLoRA + FSDP Docs (#1403)
8b12468
unverified

hamel committed on

Update ChatTemplate enum to include alpaca and gemma (#1396)
0976781
unverified

chiragjn committed on

add handling for argilla dpo-mix (#1397)
8a82d2e
unverified

winglian committed on

chore: lint (#1389)
4326520
unverified

winglian committed on

Add Glaive conversation format support (#1365)
b7d8a7d
unverified

Brian Fitzgerald winglian committed on

Set `gradient_clipping` to `auto` in DeepSpeed configs (#1382) [skip ci]
b0ee9ec
unverified

seungduk committed on

Fix pydantic configuration for the max_memory input (#1385) [skip ci]
0bc114d
unverified

dandm1 winglian committed on