
Commit History

Merge branch 'main' into flash-optimum
fd2c981
unverified

winglian committed on

Merge pull request #187 from OpenAccess-AI-Collective/strip-peft-device-map
93dacba
unverified

winglian committed on

Merge pull request #177 from NanoCode012/fix/landmark-patch
8002ffb
unverified

winglian committed on

Merge pull request #192 from OpenAccess-AI-Collective/sharegpt-custom-prompt
74ef5cc
unverified

winglian committed on

Merge branch 'main' into strip-peft-device-map
5e616d9
unverified

winglian committed on

Merge pull request #159 from AngainorDev/patch-1
8e568bb
unverified

Nanobit committed on

add typehints
c7dee56

winglian committed on

add new sharegpt, refactor prompt so it can be customized later, add exception if no data is processed
aac4b76

winglian committed on

add check for attr
c9a149f

winglian committed on

new validation for mpt w grad checkpoints
14668fa

winglian committed on

Fix strict and Lint
b565ecf

Angainor committed on

match up gradient checkpointing when using lora w config
fe0b768

winglian committed on

Fix set mem_id for inference and refactor
974dc00

Nanobit committed on

Clean up landmark patching
a6190c8

Nanobit committed on

Fix undefined LlamaForCausalLM and del try except
563b6d8

Nanobit committed on

peft no longer needs device_map
cd0a6f6

winglian committed on

Refactor landmark attention patch
919727b

Nanobit committed on

fix formatting
958da70

winglian committed on

Fix missing cfg.
a808bf9
unverified

Angainor Development committed on

Merge pull request #182 from OpenAccess-AI-Collective/fix-llama-ref
0124825
unverified

winglian committed on

address PR feedback
0c6f928

winglian committed on

add streaming dataset support for pretraining datasets
eea2731

winglian committed on

more gpt-neox long ctx fixes
ab5cd28

winglian committed on

fix bettertransformers save, force it to skip after saving correctly in callback
1a82082

winglian committed on

more tweaks to do pre-training with bettertransformers
1210dc8

winglian committed on

experimental expansion of ctx len
488a67d

winglian committed on

add validation/warning for bettertransformers and torch version
71a43f8

winglian committed on

add support for optimum bettertransformers
1edc30c

winglian committed on

fix for local variable 'LlamaForCausalLM' referenced before assignment
14163c1

winglian committed on

Merge branch 'main' into patch-1
79e2a6f
unverified

Angainor Development committed on

add support to extend context with xpos rope
a03a7d7

winglian committed on

fix for max sequence len across different model types
7f09106

winglian committed on

Fix backward compat for peft
aefb2fc

Nanobit committed on

WIP: Rely on cfg.inference
813cfa4
unverified

Angainor Development committed on

Fix grad checkpoint and outputs param
2a801b0

Nanobit committed on

Fix patching via import instead of hijacking
e44c9e0

Nanobit committed on

Feat: Add landmark attention
55b8542

Nanobit committed on

Disable Wandb
f4df266

Bruno Cabral committed on

Refactor out unmodified save_steps and eval_steps
2ef4634

Nanobit committed on

Set to use cfg.seed or 42 for backward compat
2cfe9e9

Nanobit committed on

Fix failing test
bfd27ba

Nanobit committed on

Validate falcon with fsdp
babf0fd

Nanobit committed on

Fix future deprecate prepare_model_for_int8_training
df9528f

Nanobit committed on

Fix training over existing lora
193c73b
unverified

Angainor Development committed on

fix camel ai, add guanaco/oasst mapping for sharegpt
59bb219

winglian committed on

new prompters, misc fixes for output dir missing using fsdp, and changing max seq len
4ac9e25

winglian committed on

Update doc for grad_accu and add validation tests for batch size
3c71c8d

Nanobit committed on

fix batch size calculation
5a631b3

winglian committed on

fix packing so that concatenated sequences reset the attention
9b8585d

winglian committed on