Commit History

add qwen2-72b fsdp example (#1696)
00ac302 · winglian

ensure explicit eval_sample_packing to avoid mismatch issues (#1692)
9c1af1a · winglian

Create phi3-ft-fsdp.yml (#1580)
a82a711 · aaditya

Phi-3 conversation format, example training script and perplexity metric (#1582)
cf64284 · roborovski, winglian

add support for rpo_alpha (#1681)
c996881 · winglian

re-enable DPO for tests in modal ci (#1374)
1f151c0 · winglian

Fix the broken link in README (#1678) [skip ci]
5cde065 · saeedesmaili

need to add back drop_last for sampler (#1676)
05b0bd0 · winglian

cleanup the deepspeed proxy model at the end of training (#1675)
d4f6c65 · winglian

load explicit splits on datasets (#1652)
a944f7b · winglian

set chat_template in datasets config automatically (#1664)
9d4225a · winglian

use mixins for orpo and kto configs so they work with axolotl customizations (#1674)
f7332ac · winglian

re-enable phi for tests in modal ci (#1373)
16d46b7 · winglian

revert multipack batch sampler changes (#1672)
a6b37bd · winglian

handle the system role too for chat templates (#1671)
b752080 · winglian

make sure the CI fails when pytest script fails (#1669)
fe650dd · winglian

Fix README quick start example usage model dirs (#1668)
49b967b · Abe Voelker

Correct name of MixtralBlockSparseTop2MLP (L -> l) (#1667)
65db903 · seungduk

Fix: ensure correct handling of `val_set_size` as `float` or `int` (#1655)
6a5a725 · Davide Caroselli, winglian

fix lint issue that snuck through (#1665)
f5febc7 · winglian

Fix Lora config error for Llama3 (#1659)
230e0ac · oaishi

Generalizing the chat_template prompt strategy (#1660) [skip ci]
cc11c6b · fozziethebeat

Fix Google Colab notebook 2024-05 (#1662) [skip ci]
5f91064 · Maciek

update deps (#1663) [skip ci]
ef22351 · winglian

document how to use `share_strategy="no"` (#1653) [skip ci]
8a20a7b · charlesfrye

Switch to parallel FFD bin packing algorithm. (#1619)
367b2e8 · winglian, daaave

support for custom messages field in sharegpt (#1651)
bbfed31 · winglian

Update tiny-llama qlora.yml addressing eval packing error (#1638)
84bb806 · Jaydeep Thik

enable loraplus setting for dpo trainer (#1646)
a27d5e1 · thepowerfuldeez

allow report_to for multiple providers (#1647)
6299eb5 · winglian

Fix llama3 chat_template (extra <|eot_id|> on last turn) (#1635)
7c2bf30 · leonardlin, winglian

fixes to save on fractional save_steps (#1643)
ba45531 · winglian

Unsloth optims for Llama (#1609)
8a1572a · winglian

add save_only_model option (#1634)
702a669 · emozilla

fix ray install (#1630)
891ae8a · winglian

more fixes to work with runpod + skypilot (#1629)
0c49ecc · winglian

cloud image w/o tmux (#1628)
6011343 · winglian

install rsync too (#1627)
419b2a6 · winglian

fix setting the authorized keys when there are more than one in the env var (#1626)
2501a37 · winglian

fix symlinks for axolotl outputs (#1625)
e6937e8 · winglian

bump versions of deps (#1621)
039e2a0 · winglian

update outputs path so that we can mount workspace to /workspace/data (#1623)
4fde300 · winglian

update torch 2.2.1 -> 2.2.2 (#1622)
3319780 · winglian

Fix `total_num_steps` (#1566)
81da7d2 · bofenghuang

FIX: max_length and max_prompt_length was not being sent to ORPOTrainer (#1584)
1e1921b · alimosavian (Ali Mosavian), winglian

make sure to save on the last step (#1615)
1634ac8 · winglian

fix attention mask collation (#1603)
0298273 · winglian

add dstack section (#1612) [skip ci]
5d97e65 · chansung, winglian