Commit History
814aee6  Phi2 multipack (#1173)  (winglian)
5439707  Feat(test): Add tests for alpaca chatml prompt tokenizer (#1088)
e799e08  Falcon embeddings (#1149) [skip docker]  (winglian)
2ce5c0d  Deprecate max packed sequence len (#1141)  (winglian)
6910e6a  Multipack simplify for Mixtral (#1142)  (winglian)
8487b97  Add `layers_to_transform` for `lora_config` (#1118)  (xzuyn)
0865613  Enable or disable bf16 support based on availability (#1116)  (Simon Hällqvist)
da97285  keep gate in fp32 for 16 bit loras (#1105)  (winglian)
78c5b19  add gptneox embeddings, fix phi2 inputs, also fix the casting (#1083)  (winglian)
0ce1a65  update sharegpt conversations when chatml chat template is set (#1075) [skip ci]  (winglian)
0f10080  be more robust about checking embedding modules for lora finetunes (#1074) [skip ci]  (winglian)
788649f  attempt to also run e2e tests that needs gpus (#1070)  (winglian)
651b7a3  fix double eos token for chatml (#1054) [skip ci]  (winglian)
732851f  Phi2 rewrite (#1058)  (winglian)
553c80f  streaming multipack for pretraining dataset (#959)
f243c21  RL/DPO (#935)  (winglian)
bcc78d8  bump transformers and update attention class map name (#1023)  (winglian)
1ffa386  Feat: Warns to add to modules_to_save when adding tokens or switching special_tokens (#787)  (Nanobit)
7bbaac9  fix mistral prompt assembly (#982)  (hamel)
5ada140  Fix prompt assembly for llama (#952)
f1de29d  Respect sequence_len in config for `type: llama2_chat` (#926)  (hamel)
40a6362  support for mamba (#915)  (winglian)
a1da39c  Feat(wandb): Refactor to be more flexible (#767)  (Nanobit)
fb12895  Feat: Add warmup_ratio (#893)  (Nanobit)
9bf854e  Phi update 202311 (#876)  (winglian)
b3a61e8  add e2e tests for checking functionality of resume from checkpoint (#865)  (winglian)
6dc68a6  use temp_dir kwarg instead  (winglian)
7de6a56  missing dunder-init  (winglian)
c74f045  chore: lint  (winglian)
0402d19  make sure to cleanup tmp output_dir for e2e tests  (winglian)
2d8def6  simplify by removing duplicate base_model_config (#772)  (winglian)
44c9d01  Fix: Warn when fullfinetune without adapter (#770)  (Nanobit)
ca84cca  convert exponential notation lr to floats (#771)  (winglian)
9923b72  Fix: eval table conflict with eval_sample_packing (#769)  (Nanobit)
21cf09b  remove lora fused packing test (#758)  (winglian)
15d3a65  Implement fused modules (#747)
f30afe4  misc sharegpt fixes (#723)  (winglian)
697c50d  Feat: Allow usage of native Mistral FA when no sample_packing (#669)  (Nanobit)
5b0bc48  add mistral e2e tests (#649)  (winglian)
383f88d  Fix(cfg): Add validation for save_strategy and eval_strategy (#633)  (Nanobit)
e7d3e2d  use fastchat conversations template (#578)  (winglian)
cfbce02  Fix: Fail bf16 check when running on cpu during merge (#631)  (Nanobit)
a363604  better handling and logging of empty sharegpt turns (#603)  (winglian)
03e5907  misc fixes to add gptq tests (#621)  (winglian)
12a2dbb  Support Sample packing for phi arch (#586)  (winglian)