Commit History
ace70b3  Fix: lowercase `True` values in config (#713) (atgctg)
295b266  Get qlora mistral-7b fine tuning working on a single 4090 (#708) (lukemarsden)
f91db19  fix unneeded space (#699) (mhenrichsen)
83a950b  lint (mhenrichsen)
4c8ddf2  new lr, sample pack (mhenrichsen)
669f1d0  Fix: Higher vram usage for mistral and sample_packing (#691) (Nanobit)
d4a88e4  Adding qlora config for Mistral (#675) (Abhishek Mishra)
e50a64e  prepared dataset caching, other misc fixes (#665) (winglian)
b88f515  Update mistral/README.md (#647) (Adarsh Shirawalmath)
eb41f76  Feat: Add example for Mistral (#644) (Nanobit)
d887ad8  eval_table isn't quite stable enough to be in default llama configs (#637) (winglian)
19a600a  Feat: Add support for upstream FA2 (#626) (Nanobit)
4fecbfe  default model changed (mhenrichsen)
faecff9  support to disable exllama for gptq (#604) (winglian)
674c576  more sane defaults for openllama 3b used for quickstarts (#602) (winglian)
6b9b229  btlm and falcon monkey patches for flash attn (#566) (winglian)
62eaee7  make phi training work with Loras (#588) (winglian)
12a2dbb  Support Sample packing for phi arch (#586) (winglian)
1aa4007  Fix Codellama examples (#582) (Doan Minh Phuong)
2284209  Phi examples (#569) (winglian)
5b67ea9  Add training callback to send predictions to WandB table (#521) (Glavin001)
3437149  recommend padding when using sample packing (#531) (winglian)
3355706  Add support for GPTQ using native transformers/peft (#468) (winglian)
8e197f6  pad_to_worst_case_seq_len boolean, for testing memory limits (#498)
3513071  Feat(cfg): Add code-llama configs for all sizes (#479)
fe4d6ba  Add example Llama 2 ReLoRA config (#471) (chargoddard)
cb9797e  improve llama pad token handling (#475) (winglian)
1687be6  don't use mask expansion for inference (#392) (winglian)
fdffef5  new llama-2 default settings (#370)
7019509  Add wandb_entity to wandb options, update example configs, update README (#361)
36fefcf  set group_by_length to false in examples (tmm1)
dc71d88  feat/llama-2 examples (#319)
3881143  Add XGen info to README and example config (ethanhs)
945c419  Use AutoTokenizer for redpajama example (sroecker)
16bb627  Merge pull request #92 from OpenAccess-AI-Collective/flash-optimum (winglian)
fd2c981  Merge branch 'main' into flash-optimum (winglian)
2ba4ae8  tweak config to work (winglian)
94f310c  Merge pull request #193 from OpenAccess-AI-Collective/config-fixes-20230612 (winglian)
52cde69  Fix config path after config moved (Nanobit)
9a58e99  config fixes (winglian)
6b3f509  forgot to add this file (winglian)
d0d7eaa  update openllama and clean up paths (winglian)
effbbf6  more pruning (winglian)
c530e4b  more config pruning and migrating (winglian)
77762a5  get rid of some configs, formalize pythioa lora config (winglian)
0c6f928  address PR feedback (winglian)
1db46a9  linting fix (winglian)
3961902  use pythia-12b, neox-20b is flaky (winglian)
c8242de  Merge pull request #132 from utensil/falcon-7b-qlora (Nanobit)