Commit History
c75f916 · added tiny llama examples for lora and qlora (#1027) · Tim Dolan
5f79b82 · new evals_per_epoch and saves_per_epoch to make things cleaner (#944) · winglian
a1da39c · Feat(wandb): Refactor to be more flexible (#767) · Nanobit
1470650 · various bugfixes (#856) · winglian
f544ab2 · don't compile deepspeed or bitsandbytes from source (#837) · winglian
8b79ff0 · fix eval_steps to be a sane default (#797) · winglian
2d8def6 · simplify by removing duplicate base_model_config (#772) · winglian
15d3a65 · Implement fused modules (#747)
e50a64e · prepared dataset caching, other misc fixes (#665) · winglian
d887ad8 · eval_table isn't quite stable enough to be in default llama configs (#637) · winglian
4fecbfe · default model changed · mhenrichsen
faecff9 · support to disable exllama for gptq (#604) · winglian
5b67ea9 · Add training callback to send predictions to WandB table (#521) · Glavin001
3437149 · recommend padding when using sample packing (#531) · winglian
3355706 · Add support for GPTQ using native transformers/peft (#468) · winglian
fe4d6ba · Add example Llama 2 ReLoRA config (#471) · chargoddard
1687be6 · don't use mask expansion for inference (#392) · winglian
fdffef5 · new llama-2 default settings (#370)
7019509 · Add wandb_entity to wandb options, update example configs, update README (#361)
36fefcf · set group_by_length to false in examples · tmm1