Commit History
919727b Refactor landmark attention patch (Nanobit)
a808bf9 Fix missing cfg. (Angainor Development, unverified)
0124825 Merge pull request #182 from OpenAccess-AI-Collective/fix-llama-ref (winglian, unverified)
ab5cd28 more gpt-neox long ctx fixes (winglian)
1210dc8 more tweaks to do pre-training with bettertransformers (winglian)
1edc30c add support for optimum bettertransformers (winglian)
14163c1 fix for local variable 'LlamaForCausalLM' referenced before assignment (winglian)
79e2a6f Merge branch 'main' into patch-1 (Angainor Development, unverified)
a03a7d7 add support to extend context with xpos rope (winglian)
7f09106 fix for max sequence len across different model types (winglian)
aefb2fc Fix backward compat for peft (Nanobit)
813cfa4 WIP: Rely on cfg.inference (Angainor Development, unverified)
e44c9e0 Fix patching via import instead of hijacking (Nanobit)
55b8542 Feat: Add landmark attention (Nanobit)
df9528f Fix future deprecate prepare_model_for_int8_training (Nanobit)
193c73b Fix training over existing lora (Angainor Development, unverified)
4ac9e25 new prompters, misc fixes for output dir missing using fsdp, and changing max seq len (winglian)
2d0ba3b Merge pull request #124 from OpenAccess-AI-Collective/xformers-fix (winglian, unverified)
e3c494c remove unused import and update readme (winglian)
6cb2310 copy xformers attn from ooba since we removed dep on alpaca_lora_4bit (winglian)
39a208c fix up tokenizer config, isort fix (winglian)
2520ecd split up llama model loading so config can be loaded from base config and models can be loaded from a path (winglian)
594e72b Fix incorrect rebase (Nanobit)
cfcc549 fix relative path for fixtures (winglian)
37293dc Apply isort then black (Nanobit)
e9650d3 Fix mypy typing (Nanobit)
f4e5d86 Lint models.py (Nanobit)
e65aeed fix relative path for fixtures (winglian)
56f9ca5 refactor: fix previous refactors (Nanobit)
8bd7a49 Refactor to use DictDefault instead (Nanobit)
bdfe7c9 Convert attrdict to addict (Nanobit)
0d4a7f4 Merge pull request #67 from OpenAccess-AI-Collective/refactor-tokenizer-load (winglian, unverified)
147241c Merge branch 'main' into refactor/rename-4b-to-gptq (winglian, unverified)
4c90633 fix auto linear modules for lora w/o any set already (winglian)
dd00657 refactor(param): rename load_4bit config param to gptq (Thytu)
32e6fe9 load the tokenizer separately from the model (winglian)
9196237 Add cfg.lora_target_linear (Nanobit)
1987e5c qlora and 4bit check so we are able to merge and unload (winglian)
7b5e762 fix merge conflict failure, black format (winglian)
34c99f9 fixes to make qlora actually work (winglian)
e396654 fix tokenizer loading, got openllama 3b working (winglian)
f523a08 stray s (winglian)
676d7da cfg.cfg fix, also de-dupe lora module list (winglian)
a8771b0 fix tuple add to list (winglian)
ffd1043 attempt to find linear modules for qlora (winglian)
ce34d64 apply black formatting (winglian)