qwerrwe / src

Commit History

fix prompters, especially the sharegpt prompter
5e37144

winglian committed on

more fixes
bdbca8f

winglian committed on

more fixes
42410c7

winglian committed on

fix torch_dtype for model load
aef00b6

winglian committed on

move filter to before saving so it doesn't happen every time, update runpod manual script
0d28df0

winglian committed on

whoops, gt vs lt
84c7bc4

winglian committed on

optimize dataloading to use cache, fix model token embedding sizes
aa3c3f9

winglian committed on

Merge branch 'main' into patch-2
89b7f26
unverified

Nanobit committed on

black formatting
2bc1a5b

winglian committed on

various fixes
7a490a4

winglian committed on

Fix Trainer() got multiple values for keyword argument 'callbacks'
813aab3
unverified

Nanobit committed on

testing mpt triton
e2e68c3

winglian committed on

fix conditional so alpaca doesn't choke
a27d594

winglian committed on

Rename variable to use same convention
174b74d

Nanobit committed on

Add CompletionPrompt type
cf68153

Nanobit committed on

Merge pull request #21 from NanoCode012/patch-1
bd3c5a5
unverified

winglian committed on

Merge pull request #19 from NanoCode012/feat/callback-save-lora
bcbc99e
unverified

winglian committed on

Update trainer.py
36aaea0
unverified

Nanobit committed on

Fix condition scheduler
5b6690a
unverified

Nanobit committed on

add support for trust_remote_code for mpt models
a125693

winglian committed on

Add callbacks to Trainer
cc77bab

Nanobit committed on

Add callback save peft_model on_save
0d6708b

Nanobit committed on

Jeopardy bot! (#17)
a12fb0a
unverified

winglian committed on

fix #16 load best model setting when using 8bit
a4329b1

winglian committed on

use micro batch size for eval size if not specified
550502b

winglian committed on

refactor inference, warn if model is frozen
247825b

winglian committed on

Merge pull request #13 from winglian/dev
cb9a887
unverified

winglian committed on

Add eval_batch_size for evaluation
0e74b64

Nanobit committed on

fix log sweep lr
a10a826

winglian committed on

support for multi line inference input, log sweep over learning rates
9105935

winglian committed on

fix adam bnb optimizer grouped parameters, fix peft model 8bit conversion logic, black formatting
7748f3d

winglian committed on

support llama-adapter zero init attention
2255bb7

winglian committed on

fsdp config dict fix, todo list, add torchdistx support
ad2b48c

winglian committed on

8bit and deepspeed changes
9190ada

winglian committed on

don't load models in 8bit unless they are using an adapter, also fix tokenizer load in exceptional case
6dfdd2d

winglian committed on

fix fsdp training args
29936bb

winglian committed on

fix for zero value warmup steps
7882181

winglian committed on

fix sharegpt tokenization, refactor tokenization debugging
5159d00

winglian committed on

wire up gradient checkpointing for 4bit
c0f50d9

winglian committed on

fix dataset handling, support galactica
4a17a4c

winglian committed on

tweaks to data loading, 8 bit adam, accelerate and deepspeed
097d367

winglian committed on

shuffle and split dataset after save/load
4f2584f

winglian committed on

fix sharegpt handling from hf, don't worry about loading llama if using earlier transformers release
8d43785

winglian committed on

various bugfixes
94f5e41

winglian committed on

fix bug when model_type not explicitly passed
bb991fd

winglian committed on

improve inference
d653859

winglian committed on

quickstart instructions for starting from runpod (#5)
0a472e1
unverified

winglian committed on

attempt xformers hijack attention
8746b70

winglian committed on

WIP large refactor to make finetune script a little more manageable (#3)
6045345
unverified

winglian committed on

add support for alpaca reflect training (#2)
81de0ef
unverified

winglian committed on