more checks and fixes for deepspeed and fsdp (#1208) [skip ci] e923e62 unverified winglian committed on Jan 26
workaround for transformers bug requiring do_sample for saving pretrained (#1206) ba944e6 unverified winglian committed on Jan 25
make sure to register the base chatml template even if no system message is provided (#1207) badda37 unverified winglian committed on Jan 25
precompute dpo logprobs setting and fixes (#1199) [skip ci] 33e1170 unverified winglian committed on Jan 25
fix learning rate scheduler's warnings (#1135) [skip ci] b4ac96a unverified ricdomolm winglian committed on Jan 25
Feat/chatml add system message (#1117) 98b4762 unverified mhenrichsen Mads Henrichsen winglian committed on Jan 25
fix(log): improve warning to clarify that lora_modules_to_save expects a list (#1197) 08719b9 unverified Nanobit committed on Jan 25
Standardize system prompt format for AlpacaPrompter (#1190) [skip ci] af02430 unverified Oleh Kuznetsov committed on Jan 24
more dpo fixes for dataset loading and docs (#1185) [skip ci] 5bce45f unverified winglian committed on Jan 24
Fix generation_config validation raising Exception for do_merge_lora (#1184) 02f2c72 unverified tisorlawan committed on Jan 24
Add support for offline mode with HF_HUB_OFFLINE envvar (#1182) 71141de unverified James Wade winglian committed on Jan 24
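For reference, HF_HUB_OFFLINE is the documented Hugging Face Hub switch for cache-only operation. A minimal sketch of how a user would exercise it (the model name below is just a placeholder):

```python
import os

# Setting HF_HUB_OFFLINE before any Hugging Face imports forces the hub
# client to resolve everything from the local cache and skip network calls.
os.environ["HF_HUB_OFFLINE"] = "1"

from transformers import AutoTokenizer

# Loads from the local cache only; raises an error if the files were
# never downloaded, instead of silently reaching out to the Hub.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```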
don't fail if weights can't be cast due to offload when merging (#1172) [skip ci] fb7f9b9 unverified winglian committed on Jan 23
support for explicit test_dataset definition for evals (#786) cda52dc unverified winglian committed on Jan 23
improve VRAM use with gradient checkpointing (#1167) [skip ci] 802f966 unverified winglian committed on Jan 23
Add mlflow callback for pushing config to mlflow artifacts (#1125) b8e5603 unverified JohanWork committed on Jan 22
set fp16 to false if bf16 is enabled, update bf16: auto in example YAMLs (#1122) [skip ci] 782b6a4 unverified winglian Nanobit committed on Jan 22
make sure the model config loader respects the model_revision too (#1160) [skip-ci] fccb542 unverified winglian committed on Jan 22
feat(dataset): add config to keep processed dataset in memory (#1152) 3db5f2f unverified Nanobit committed on Jan 20
Add shifted sparse attention (#973) [skip-ci] 1d70f24 unverified jrc joecummings winglian committed on Jan 18
fix(preprocess): make sure the dataset is not loaded from cache when using the preprocess CLI (#1136) 1e56b88 unverified Nanobit committed on Jan 17
Enable or disable bf16 support based on availability (#1116) 0865613 unverified Simon Hällqvist committed on Jan 14
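The availability check in question boils down to PyTorch's public API. A minimal sketch of the idea behind `bf16: auto` (illustrative only, not the project's actual code):

```python
import torch

def resolve_precision() -> dict:
    """Prefer bf16 when the hardware supports it, else fall back to fp16.

    bf16 requires CUDA and an Ampere-or-newer GPU; fp16 is the usual
    fallback on older cards.
    """
    bf16_ok = torch.cuda.is_available() and torch.cuda.is_bf16_supported()
    return {"bf16": bf16_ok, "fp16": not bf16_ok}

print(resolve_precision())  # e.g. {'bf16': True, 'fp16': False} on Ampere+
```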
Disable caching when `--disable_caching` is passed in the CLI (#1110) d66b101 unverified casperhansen winglian committed on Jan 13
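Under the hood, dataset caching is controlled by the `datasets` library. A minimal sketch of the equivalent user-level call (assuming the flag maps onto `datasets.disable_caching`, which is that library's documented switch):

```python
from datasets import disable_caching

# Globally disable the on-disk Arrow cache; subsequent map()/filter()
# calls recompute results instead of reusing fingerprinted cache files.
disable_caching()
```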
add gptneox embeddings, fix phi2 inputs, also fix the casting (#1083) 78c5b19 unverified winglian committed on Jan 11
optimize calculation of cu_seqlens from position_ids (#1084) [skip ci] 90036eb unverified winglian committed on Jan 10
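For context: with sample packing, multiple sequences share one row and `position_ids` restart at 0 at each sequence boundary, while flash-attention kernels want `cu_seqlens` (cumulative sequence lengths). A minimal sketch of the derivation, illustrative only and not the optimized implementation from the PR:

```python
import torch

def cu_seqlens_from_position_ids(position_ids: torch.Tensor) -> torch.Tensor:
    """Derive cumulative sequence lengths from packed position_ids.

    Each packed sequence restarts its positions at 0, so every index
    where position_ids == 0 marks the start of a new sequence.
    """
    flat = position_ids.flatten()
    starts = torch.nonzero(flat == 0, as_tuple=True)[0]           # sequence starts
    ends = torch.cat([starts[1:], torch.tensor([flat.numel()])])  # exclusive ends
    return torch.cat([torch.zeros(1, dtype=torch.int32),
                      ends.to(torch.int32)])

# Two packed sequences of lengths 3 and 2 -> cu_seqlens [0, 3, 5]
pos = torch.tensor([0, 1, 2, 0, 1])
print(cu_seqlens_from_position_ids(pos))  # tensor([0, 3, 5], dtype=torch.int32)
```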
additional logging to report the maximum token length of a sequence in the dataset (#1066) [skip ci] 2f2582e unverified winglian committed on Jan 10
update sharegpt conversations when the chatml chat template is set (#1075) [skip ci] 0ce1a65 unverified winglian committed on Jan 10
fix: `train_on_inputs: true` ignored for sharegpt (#1045) [skip ci] 043c386 unverified Nanobit winglian committed on Jan 10
be more robust about checking embedding modules for lora finetunes (#1074) [skip ci] 0f10080 unverified winglian committed on Jan 10
swap the data collator for evals if not using sample packing (#1076) ead34c5 unverified winglian committed on Jan 10