Commit History

b432889  feat: enable trl's autounwrap (#1060)
78c5b19  add gptneox embeddings, fix phi2 inputs, also fix the casting (#1083)
23495a8  misc fixes from #943 (#1086) [skip ci]
90036eb  optimize calculation of cu_seqlens from position_ids (#1084) [skip ci]
d69ba2b  fix: warn user to install mamba_ssm package (#1019)
2f2582e  additional logging to get maximum token length of a sequence in the dataset (#1066) [skip ci]
0ce1a65  update sharegpt conversations when chatml chat template is set (#1075) [skip ci]
0f10080  be more robust about checking embedding modules for lora finetunes (#1074) [skip ci]
ead34c5  swap the data collator for evals if not using sample packing (#1076)
d7057cc  paired kto support (#1069)
090c24d  Add: mlflow for experiment tracking (#1059) [skip ci]
651b7a3  fix double eos token for chatml (#1054) [skip ci]
c3e8165  fix: torch_dtype mistral default to fp32 (#1050)
732851f  Phi2 rewrite (#1058)
553c80f  streaming multipack for pretraining dataset (#959)
cbdbf9e  feat: always push checkpoint to hub if set (#1049) [skip ci]
bdfefaf  feature: better device mapping for large models (#918)
63fb3eb  set default for merge (#1044)
31d2350  fix model card upload for PEFT models (#1043)
f243c21  RL/DPO (#935)
59b2d30  Added chatglm3 conversation type for training models like TinyLLama (#1036)
bcc78d8  bump transformers and update attention class map name (#1023)
74532dd  chore(config): clean up old log for Qwen (#1034)
4d2e842  use recommended setting for use_reentrant w gradient checkpointing (#1021)
3678a6c  Fix: bf16 support for inference (#981)
f8ae59b  Adds chat templates (#1022)
4f4d638  [WandB] Push axolotl config to top level wandb files (#1014)
ba043a3  add ultrachat prompt strategies (#996)
f6ecf14  feat: remove need to add load_in* during merge (#1017)
70b46ca  remove landmark attn and xpos rope implementations (#1010)
85dd4d5  add config to model card (#1005)
db9094d  FEAT: add tagging support to axolotl (#1004)
1ffa386  Feat: Warns to add to modules_to_save when adding tokens or switching special_tokens (#787)
7bbaac9  fix mistral prompt assembly (#982)
13e9381  fix: add lr scheduler kwargs to Trainer (#972)
5ada140  Fix prompt assembly for llama (#952)
ef24342  fix: switch to using the HuggingFace Transformers NEFT implementation (#941) (kallewoof)
5ea3aa3  Fix Deepspeed loading (#950)
f1f60cb  Flash attn hotfix (#951)
450e04d  fix: remove excessive newlines in system prompt(s) for alpaca (#936) (kallewoof)