Commit History
support for explicit test_dataset definition for evals (#786)
cda52dc
winglian
Falcon embeddings (#1149) [skip docker]
e799e08
winglian
Vram fix attempt (#1164) [skip ci]
32580c1
winglian
improve vram use w gradient checkpointing (#1167) [skip ci]
802f966
winglian
Add mlflow callback for pushing config to mlflow artifacts (#1125)
b8e5603
JohanWork
jupyter lab fixes (#1139) [skip ci]
eaaeefc
winglian
Qwen2 (#1166)
f5a828a
winglian
make sure the model config loader respects the model_revision too (#1160) [skip-ci]
fccb542
winglian
Deprecate max packed sequence len (#1141)
2ce5c0d
winglian
feat(dataset): add config to keep processed dataset in memory (#1152)
3db5f2f
Nanobit
Multipack simplify for Mixtral (#1142)
6910e6a
winglian
fix bf16 check when preprocessing data (#1140)
317fa25
winglian
fix(preprocess): Make sure dataset not loaded from cache when using preprocess cli (#1136)
1e56b88
Nanobit
Preprocess dataset size fix (#1131)
7570446
winglian
Add `layers_to_transform` for `lora_config` (#1118)
8487b97
xzuyn
Enable or disable bf16 support based on availability (#1116)
0865613
Simon Hällqvist
Reverse caching PR (#1115)
2202a20
casperhansen
Disable caching on `--disable_caching` in CLI (#1110)
d66b101
keep gate in fp32 for 16 bit loras (#1105)
da97285
winglian
feat: enable trl's autounwrap (#1060)
b432889
Nanobit
add gptneox embeddings, fix phi2 inputs, also fix the casting (#1083)
78c5b19
winglian
misc fixes from #943 (#1086) [skip ci]
23495a8
winglian
optimize calculation of cu_seqlens from position_ids (#1084) [skip ci]
90036eb
winglian
fix: warn user to install mamba_ssm package (#1019)
d69ba2b
Nanobit
additional logging to get maximum token length of a sequence in the dataset (#1066) [skip ci]
2f2582e
winglian
update sharegpt conversations when chatml chat template is set (#1075) [skip ci]
0ce1a65
winglian
be more robust about checking embedding modules for lora finetunes (#1074) [skip ci]
0f10080
winglian
swap the data collator for evals if not using sample packing (#1076)
ead34c5
winglian
paired kto support (#1069)
d7057cc
winglian
Add: mlflow for experiment tracking (#1059) [skip ci]
090c24d
fix double eos token for chatml (#1054) [skip ci]
651b7a3
winglian
fix: torch_dtype mistral default to fp32 (#1050)
c3e8165
Nanobit
Phi2 rewrite (#1058)
732851f
winglian
streaming multipack for pretraining dataset (#959)
553c80f
feat: always push checkpoint to hub if set (#1049) [skip ci]
cbdbf9e
Nanobit
feature: better device mapping for large models (#918)
bdfefaf
set default for merge (#1044)
63fb3eb
hamel
fix model card upload for PEFT models (#1043)
31d2350
hamel
RL/DPO (#935)
f243c21
winglian
Added chatglm3 conversation type for training models like TinyLLama (#1036)
59b2d30
xaviviro
bump transformers and update attention class map name (#1023)
bcc78d8
winglian
chore(config): clean up old log for Qwen (#1034)
74532dd
Nanobit
use recommended setting for use_reentrant w gradient checkpointing (#1021)
4d2e842
winglian