Commit History

Clean up landmark patching
a6190c8

Nanobit committed

Fix undefined LlamaForCausalLM and del try except
563b6d8

Nanobit committed

peft no longer needs device_map
cd0a6f6

winglian committed

Update FAQS.md
dd7d16d (unverified)

Akj2023 committed

Refactor landmark attention patch
919727b

Nanobit committed

Update FAQS.md
5ffefee (unverified)

Akj2023 committed

Merge pull request #183 from OpenAccess-AI-Collective/inference-from-stdin
d9f713e (unverified)

winglian committed

fix formatting
958da70

winglian committed

pass a prompt in from stdin for inference
c4e4f81

winglian committed
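
For context, a minimal sketch of what piping a prompt in over stdin can look like, assuming a Hugging Face transformers causal LM; the checkpoint name and generation settings are illustrative, not the repo's actual inference script:

```python
import sys

from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint only; any causal LM works the same way.
model_id = "EleutherAI/pythia-160m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Read the whole prompt from stdin, e.g. `echo "Hello" | python infer.py`.
prompt = sys.stdin.read().strip()
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```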

Fix missing cfg.
a808bf9 (unverified)

Angainor Development committed

Merge pull request #182 from OpenAccess-AI-Collective/fix-llama-ref
0124825 (unverified)

winglian committed

Update scripts/finetune.py
759e867 (unverified)

winglian and Nanobit committed

address PR feedback
0c6f928

winglian committed

add streaming dataset support for pretraining datasets
eea2731

winglian committed
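
As a rough illustration of what streaming support buys for pretraining data, a sketch using the datasets library's streaming mode; the corpus name is an example, not necessarily what the repo wires up:

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset that pulls shards lazily over
# the network, so a huge pretraining corpus never has to fit on local disk.
dataset = load_dataset("c4", "en", split="train", streaming=True)

# Inspect a few records without downloading the whole corpus.
for example in dataset.take(3):
    print(example["text"][:80])
```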

linting fix
1db46a9

winglian committed

more gpt-neox long ctx fixes
ab5cd28

winglian committed

fix bettertransformers save, force it to skip after saving correctly in callback
1a82082

winglian committed

more tweaks to do pre-training with bettertransformers
1210dc8

winglian committed

experimental expansion of ctx len
488a67d

winglian committed

add validation/warning for bettertransformers and torch version
71a43f8

winglian committed
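
A sketch of the kind of guard such a validation commit might add; the exact threshold and message are assumptions, based on BetterTransformer's fused attention path building on torch.nn.functional.scaled_dot_product_attention from PyTorch 2.0:

```python
import torch
from packaging import version

# Assumed minimum version: scaled_dot_product_attention shipped with
# PyTorch 2.0, and BetterTransformer's speedups depend on it.
if version.parse(torch.__version__) < version.parse("2.0.0"):
    raise ValueError(
        "BetterTransformer support requires torch>=2.0.0, "
        f"found torch=={torch.__version__}"
    )
```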

use pythia-12b, neox-20b is flaky
3961902

winglian committed

add flash attn context for efficient training and attempt setting model to train mode
8792199

winglian committed

add support for optimum bettertransformers
1edc30c

winglian committed
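
For reference, the Optimum API in question, as a minimal sketch rather than the repo's actual integration; BetterTransformer swaps supported modules for fused-kernel equivalents and has to be reversed before serializing, which is what the save fix further up this log deals with:

```python
from optimum.bettertransformer import BetterTransformer
from transformers import AutoModelForCausalLM

# Illustrative checkpoint only.
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m")

# Swap supported layers for BetterTransformer's fused implementations.
model = BetterTransformer.transform(model)

# ... train or run inference ...

# Transformed modules cannot be saved directly, so undo the swap first.
model = BetterTransformer.reverse(model)
model.save_pretrained("./out")
```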

fix for local variable 'LlamaForCausalLM' referenced before assignment
14163c1

winglian committed

Merge pull request #181 from OpenAccess-AI-Collective/xpos-rope
41e4f6c (unverified)

winglian committed

Merge branch 'main' into patch-1
79e2a6f (unverified)

Angainor Development committed

Remove explicit definition of cfg.inference
c250898 (unverified)

Angainor Development committed

Merge pull request #180 from Glavin001/feat/stream-inference
215d775 (unverified)

winglian committed

formatting for linter
f36e227 (unverified)

winglian committed

add option to readme
5878bb1

winglian committed

add support to extend context with xpos rope
a03a7d7

winglian committed

Add streaming inference & fix stopping at EOS
fec6bcc

Glavin001 committed
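
A minimal sketch of token-by-token streaming with transformers' TextStreamer, plus an explicit eos_token_id so generation stops at end-of-sequence; this illustrates the feature, not Glavin001's exact implementation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "EleutherAI/pythia-160m"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# TextStreamer prints tokens to stdout as soon as they are generated.
streamer = TextStreamer(tokenizer, skip_prompt=True)

inputs = tokenizer("Once upon a time", return_tensors="pt")
model.generate(
    **inputs,
    max_new_tokens=64,
    streamer=streamer,
    # Stop cleanly when the model emits its end-of-sequence token.
    eos_token_id=tokenizer.eos_token_id,
)
```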

Merge pull request #179 from OpenAccess-AI-Collective/fix-max_seq_len
931e606 (unverified)

winglian committed

fix for max sequence len across different model types
7f09106

winglian committed

Merge pull request #178 from PocketDocLabs/main
6b50200 (unverified)

Nanobit committed

Update README.md to reflect current gradient checkpointing support
16f9e28 (unverified)

PocketDoc committed

Merge pull request #176 from NanoCode012/fix/peft-import
b9083a7 (unverified)

Nanobit committed

Fix backward compat for peft
aefb2fc

Nanobit committed

Merge pull request #169 from NanoCode012/feat/landmark
b5aa8d8 (unverified)

Nanobit committed

Merge pull request #171 from OpenAccess-AI-Collective/NanoCode012-falcon-lora-matrix
4d6490b (unverified)

Nanobit committed

Fix falcon support lora
b242b69 (unverified)

Nanobit committed

Merge pull request #170 from OpenAccess-AI-Collective/NanoCode012-lambdalabs-fix
320beb2 (unverified)

Nanobit committed

Feed cfg.inference
bd3b537 (unverified)

Angainor Development committed

WIP: Rely on cfg.inference
813cfa4 (unverified)

Angainor Development committed

Improve lambda labs instruction
2e13cef (unverified)

Nanobit committed

Fix grad checkpoint and outputs param
2a801b0

Nanobit committed

Fix patching via import instead of hijacking
e44c9e0

Nanobit committed

Feat: Add landmark attention
55b8542

Nanobit committed