Commit History

cfg.cfg fix, also de-dupe lora module list
676d7da

winglian committed on

fix tuple add to list
a8771b0

winglian committed on

Update src/axolotl/utils/models.py
1cf21da
unverified

winglian Nanobit committed on

attempt to find linear modules for qlora
ffd1043

winglian committed on

apply black formatting
ce34d64

winglian committed on

Merge branch 'main' of github.com:OpenAccess-AI-Collective/axolotl into dev
ce694e2

winglian committed on

remove un-needed code, add validation
1f5d83e

winglian committed on

fix: handles AutoTokenizer from untrusted source
88ad05d
unverified

Valentin De Matos committed on

more qlora support
e8aacfb

winglian committed on

prepare does all this already for qlora?
b9d07aa

winglian committed on

integrate qlora? maybe?
3b4d055

winglian committed on

fix missing fp16 kwarg
2ae936f

winglian committed on

Add qa style data for alpaca instructions, fix one_cycle scheduler
3a50377

winglian committed on

don't need to set here
de6da13

winglian committed on

be able to use adam bnb 8bit and one cycle scheduler w fsdp
9493b1b

winglian committed on

Update src/axolotl/utils/models.py for info msg
1b3e401
unverified

winglian Nanobit committed on

Update src/axolotl/utils/data.py for spelling
98a6781
unverified

winglian Nanobit committed on

make sure to use train split if loading from hf
607a4d3

winglian committed on

make one cycle lr div factor configurable
99383f1

winglian committed on

fix new dataset prompt tokenizers
0f74464

winglian committed on

pygmalion dataset prompts format, cached tokenized datasets should be hashed on the tokenizer too
2809f3f

winglian committed on

tokenization fixes
4ea9a66

winglian committed on

optionally be able to specify alpaca or chat style prompts
1d5ab84

winglian committed on

Set `half` using `cfg.fp16` for 4bit
641f801
unverified

Nanobit committed on

concise multiple choice and tldr summarize
1365073

winglian committed on

support for replit lm
8c2f3cb

winglian committed on

add alpaca multiple choice instruct dataset support
b46bc02

winglian committed on

Add `lora_modules_to_save`
2c73c81
unverified

Nanobit committed on

more fixes
bdbca8f

winglian committed on

more fixes
42410c7

winglian committed on

fix torch_dtype for model load
aef00b6

winglian committed on

move filter to before saving so it doesn't happen everytime, update runpod manual script
0d28df0

winglian committed on

whoops, gt vs lt
84c7bc4

winglian committed on

optimize dataloading to use cache, fix model token embedding sizes
aa3c3f9

winglian committed on

Merge branch 'main' into patch-2
89b7f26
unverified

Nanobit committed on

black formatting
2bc1a5b

winglian committed on

various fixes
7a490a4

winglian committed on

Fix Trainer() got multiple values for keyword argument 'callbacks'
813aab3
unverified

Nanobit committed on

testing mpt triton
e2e68c3

winglian committed on

fix conditional so alpaca doesn't choke
a27d594

winglian committed on

Add CompletionPrompt type
cf68153

Nanobit committed on

Merge pull request #21 from NanoCode012/patch-1
bd3c5a5
unverified

winglian committed on

Merge pull request #19 from NanoCode012/feat/callback-save-lora
bcbc99e
unverified

winglian committed on

Update trainer.py
36aaea0
unverified

Nanobit committed on

Fix condition scheduler
5b6690a
unverified

Nanobit committed on

add support for trust_remote_code for mpt models
a125693

winglian committed on

Add callbacks to Trainer
cc77bab

Nanobit committed on

Add callback save peft_model on_save
0d6708b

Nanobit committed on

Jeopardy bot! (#17)
a12fb0a
unverified

winglian committed on

fix #16 load best model setting when using 8bit
a4329b1

winglian committed on