
Commit History

Switch to parallel FFD bin packing algorithm. (#1619) · 367b2e8 · winglian, daaave
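The entry above names first-fit decreasing (FFD) bin packing, the classic heuristic behind packing variable-length samples into fixed context windows. A minimal serial sketch of the idea (the `ffd_pack` helper and its signature are illustrative, not axolotl's API, and the PR's parallel variant is not shown):

```python
def ffd_pack(lengths, bin_capacity):
    """First-Fit Decreasing: sort items longest-first, place each into
    the first bin with enough remaining capacity, else open a new bin.
    Returns a list of bins, each a list of item lengths."""
    bins = []        # packed item lengths per bin
    remaining = []   # remaining capacity per bin
    for length in sorted(lengths, reverse=True):
        for i, cap in enumerate(remaining):
            if length <= cap:
                bins[i].append(length)
                remaining[i] -= length
                break
        else:
            bins.append([length])
            remaining.append(bin_capacity - length)
    return bins

print(ffd_pack([2048, 1024, 1024, 512, 3072], 4096))
# [[3072, 1024], [2048, 1024, 512]]
```

Sorting longest-first lets large sequences claim bins early so shorter ones fill the leftover capacity; FFD is known to use at most roughly 11/9 of the optimal number of bins.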

support for custom messages field in sharegpt (#1651) · bbfed31 · winglian

enable loraplus setting for dpo trainer (#1646) · a27d5e1 · thepowerfuldeez

allow report_to for multiple providers (#1647) · 6299eb5 · winglian

Fix llama3 chat_template (extra <|eot_id|> on last turn) (#1635) · 7c2bf30 · leonardlin, winglian

fixes to save on fractional save_steps (#1643) · ba45531 · winglian

Unsloth optims for Llama (#1609) · 8a1572a · winglian

add save_only_model option (#1634) · 702a669 · emozilla

Fix `total_num_steps` (#1566) · 81da7d2 · bofenghuang

FIX: max_length and max_prompt_length were not being sent to ORPOTrainer (#1584) · 1e1921b · alimosavian (Ali Mosavian), winglian

make sure to save on the last step (#1615) · 1634ac8 · winglian

fix attention mask collation (#1603) · 0298273 · winglian

feat: Add LLaMA-3 instruct prompt strategies for fine-tuning (#1553) · 50421c8 · Ram, winglian

adding llama3 fastchat conversation monkeypatch (#1539) · b32c08f · Antoni-Joan Solergibert, winglian

ignore the fsdp_config section too (#1606) [skip ci] · fff06af · winglian

make sure to save the lora adapter at the end of RL/dpo training (#1573) · 796a085 · winglian

improve tool handling roles (#1587) · cb78a36 · winglian

feat: exclude mamba blocks for jamba (#1578) · 8b9c15b · Nanobit

Pass deepspeed and fsdp as None explicitly when merging adapters to allow custom device_map (#1575) · 9e1480e · chiragjn

Gradio configuration parameters (#1591) · 3367fca · marijnfs (Marijn Stollenga), winglian

improve save callbacks (#1592) · 29cf15a · winglian

Pass weakref to model in the SIGINT handler to free up model post train function (#1581) · dde02fc · chiragjn, winglian
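The weakref trick named in the entry above keeps a long-lived signal handler from pinning the model in memory after training releases it. A minimal sketch with Python's stdlib (the `install_sigint_handler` helper is hypothetical, not the project's code):

```python
import gc
import signal
import weakref

class Model:
    """Stand-in for a large model object."""

def install_sigint_handler(model):
    # Capture only a weak reference: the handler closure must not
    # keep the model alive once training code drops its reference.
    model_ref = weakref.ref(model)

    def handler(signum, frame):
        live = model_ref()  # resolves to None once the model is collected
        if live is not None:
            pass  # e.g. save a checkpoint before exiting
        raise KeyboardInterrupt

    signal.signal(signal.SIGINT, handler)
    return model_ref

model = Model()
ref = install_sigint_handler(model)
del model             # nothing else holds the model now
gc.collect()
print(ref() is None)  # True: the installed handler did not pin the model
```

Had the closure captured `model` directly, the model would stay alive for the life of the handler; a `weakref.ref` leaves ownership with the training loop.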

FIX: TRL trainer preprocessing step was running in one process (#1583) · b9bb169 · Ali Mosavian

ADD: warning hub model (#1301) · 601c08b · JohanWork, Nanobit

Add debug option for RL dataset preprocessing (#1404) · cc5d31e · abhinand, Nanobit

PoSE context length ext (#1567) · 5294653 · winglian

make sure everything stays in the same dtype when using dpo + FSDP (#1559) · 68601ec · winglian

wrap prepared_ds_path in str() to avoid TypeError in fsspec package (#1548) · 7477a53 · Frank Ruis, winglian

ORPO Trainer replacement (#1551) · 7d1d22f · winglian

Unsloth gradient checkpointing offload (#1528) · 6319da1 · winglian

DBRX Model Support (#1462) · 132eb74 · winglian

Update SaveAxolotlConfigtoWandBCallback to use artifact instead of save (#1483) · 5ed2939 · tcapelle, winglian

use locale-agnostic separator to make large nums easier to read (#1503) · da9b1a3 · winglian
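A locale-agnostic way to group digits in Python is the `,` (or `_`) grouping option of the format-spec mini-language, which ignores the process locale entirely; this sketch illustrates the general technique, not the PR's code:

```python
def human_count(n: int) -> str:
    # "," in a format spec always groups by thousands with commas,
    # regardless of LC_NUMERIC, unlike locale.format_string().
    return f"{n:,}"

print(human_count(6_738_415_616))  # 6,738,415,616
```

Using the format spec avoids calling `locale.setlocale`, which is process-global and can change output depending on the host's environment.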

WIP: Support table logging for mlflow, too (#1506) · 057fa44 · DavidFarago (Dave Farago), winglian

Correctly handle splits for datasets.arrow_dataset.Dataset objects (#1504) · 8fa0785 · scottifer8, winglian

Print versions (#1496) · 4313b1a · winglian

add field to sft dataset pydantic for completion support (#1497) · ff01c45 · winglian

ignore issues with calculating # params when printing (#1493) · 2fa65b9 · winglian

Remove `validate_quantized_dora` (#1485) · 9430b6e · xzuyn

drop empty token from beginning if tokenizer has no bos_token (in the case of qwen) (#1490) · 934fc85 · winglian

fix: reduce sample_packing warning (#1484) · bda48f0 · Nanobit

feat: validate sample packing requires flash_attention (#1465) · bf4cd67 · Nanobit

add support for cohere chat template (#1478) · 05b0b7e · winglian

don't use deepspeed or fsdp when merging loras (#1479) · 87ca3f9 · winglian

refactor utils.data module for line count linter (#1476) · e0fcef4 · winglian

Pretrain multipack v2 (#1470) · 5aa5097 · winglian

fix pretraining_ on odd datasets (#1463) · 586bd8d · monsoon-nlp