Commit History

drop length column for issues with eval without packing (#1711)
3f1f5e3
winglian committed

download model weights on preprocess step (#1693)
5783839
winglian committed

verbose failure message (#1694)
cbbf039
winglian committed

fix for when sample_packing and eval_sample_packing are different (#1695)
18cabc0
winglian committed

add back packing efficiency estimate so epochs and multi-gpu works properly (#1697)
ed8ef65
winglian committed

ensure explicit eval_sample_packing to avoid mismatch issues (#1692)
9c1af1a
winglian committed

Phi-3 conversation format, example training script and perplexity metric (#1582)
cf64284
roborovski and winglian committed

add support for rpo_alpha (#1681)
c996881
winglian committed

re-enable DPO for tests in modal ci (#1374)
1f151c0
winglian committed

need to add back drop_last for sampler (#1676)
05b0bd0
winglian committed

cleanup the deepspeed proxy model at the end of training (#1675)
d4f6c65
winglian committed

load explicit splits on datasets (#1652)
a944f7b
winglian committed

set chat_template in datasets config automatically (#1664)
9d4225a
winglian committed

use mixins for orpo and kto configs so they work with axolotl customizations (#1674)
f7332ac
winglian committed

revert multipack batch sampler changes (#1672)
a6b37bd
winglian committed

handle the system role too for chat templates (#1671)
b752080
winglian committed

make sure the CI fails when pytest script fails (#1669)
fe650dd
winglian committed

Correct name of MixtralBlockSparseTop2MLP (L -> l) (#1667)
65db903
seungduk committed

Fix: ensure correct handling of `val_set_size` as `float` or `int` (#1655)
6a5a725
Davide Caroselli and winglian committed

Generalizing the chat_template prompt strategy (#1660) [skip ci]
cc11c6b
fozziethebeat committed

Switch to parallel FFD bin packing algorithm. (#1619)
367b2e8
winglian and daaave committed

support for custom messages field in sharegpt (#1651)
bbfed31
winglian committed

enable loraplus setting for dpo trainer (#1646)
a27d5e1
thepowerfuldeez committed

allow report_to for multiple providers (#1647)
6299eb5
winglian committed

Fix llama3 chat_template (extra <|eot_id|> on last turn) (#1635)
7c2bf30
leonardlin and winglian committed

fixes to save on fractional save_steps (#1643)
ba45531
winglian committed

Unsloth optims for Llama (#1609)
8a1572a
winglian committed

add save_only_model option (#1634)
702a669
emozilla committed

Fix `total_num_steps` (#1566)
81da7d2
bofenghuang committed

FIX: max_length and max_prompt_length was not being sent to ORPOTrainer (#1584)
1e1921b
Ali Mosavian and winglian committed

make sure to save on the last step (#1615)
1634ac8
winglian committed

fix attention mask collation (#1603)
0298273
winglian committed

feat: Add LLaMA-3 instruct prompt strategies for fine-tuning (#1553)
50421c8
Ram and winglian committed

adding llama3 fastchat conversation monkeypatch (#1539)
b32c08f
Antoni-Joan Solergibert and winglian committed

ignore the fsdp_config section too (#1606) [skip ci]
fff06af
winglian committed

make sure to save the lora adapter at the end of RL/dpo training (#1573)
796a085
winglian committed

improve tool handling roles (#1587)
cb78a36
winglian committed

feat: exclude mamba blocks for jamba (#1578)
8b9c15b
Nanobit committed

Pass deepspeed and fsdp as None explicitly when merging adapters to allow custom device_map (#1575)
9e1480e
chiragjn committed

Gradio configuration parameters (#1591)
3367fca
Marijn Stollenga and winglian committed

improve save callbacks (#1592)
29cf15a
winglian committed

Pass weakref to model in the SIGINT handler to free up model post train function (#1581)
dde02fc
chiragjn and winglian committed

FIX: TRL trainer preprocessing step was running in one process (#1583)
b9bb169
Ali Mosavian committed

ADD: warning hub model (#1301)
601c08b
JohanWork and Nanobit committed

Add debug option for RL dataset preprocessing (#1404)
cc5d31e
abhinand and Nanobit committed

PoSE context length ext (#1567)
5294653
winglian committed

make sure everything stays in the same dtype when using dpo + FSDP (#1559)
68601ec
winglian committed