Commit History

note pattern when using groups
b4d1d22

tmm1 committed

update comment for group_by_length
9f99104

tmm1 committed

set group_by_length to false in examples
36fefcf

tmm1 committed

ensure enable_input_require_grads is called on model before getting the peft model (#345)
176b888

winglian committed

experimental llama 2 chat support (#296)
3392270

Jan Philipp Harries committed

add a basic ds zero3 config (#347)
bb53a16

winglian committed

Update XFormers Attention Monkeypatch to handle Llama-2 70B (GQA) (#339)
10405b9

ssmi153 committed

Added Orca Mini prompt strategy (#263)
c93655c

Jan Philipp Harries committed

optimize the iteration when tokenizing large datasets (#332)
fe28543

winglian committed

Merge pull request #336 from tmm1/flash-attn
0d2e34f

tmm1 committed

Merge pull request #337 from tmm1/readme-fix
b56a6c0

tmm1 committed

fix typo
2eda9e0

tmm1 committed

scope flash-attn+qlora fix correctly, scope to llama, add comment
78b9efb

tmm1 committed

move flash-attn monkey patch alongside the others
312a9fa

tmm1 committed

python 3.10 and 3.11 both work fine, as does pytorch 2.1.0.dev
58d6659

tmm1 committed

there is no configs folder
cc7e800

tmm1 committed

feat/llama-2 examples (#319)
dc71d88

mhenrichsen (Mads Henrichsen) committed

ensure flash-attn fixes happen in both adapter/lora modes, and use torch_dtype
248bf90

tmm1 committed

qlora w flash attention fixes (#333)
77085ea

winglian committed

add peft install back since it doesn't get installed by setup.py (#331)
db2a358

winglian committed

pin accelerate so it works with llama2 (#330)
6c9a87c

winglian committed

fix FSDP save of final model (#329)
894cba0

winglian committed

update README for updated docker images (#328)
41a4d15

winglian committed

Prune cuda117 (#327)
2c37bf6

winglian committed

latest HEAD of accelerate causes 0 loss immediately w FSDP (#321)
9f69c4d

winglian committed

update prompts for open orca to match the paper (#317)
3d4984b

winglian committed

disable gh cache for first step of docker builds too
ff7f18d

winglian committed

add runpod envs to .bashrc, fix bnb env (#316)
cf62cfd

winglian committed

don't use the gha cache w docker
c5df969

winglian committed

Merge pull request #307 from OpenAccess-AI-Collective/xgen-user-sharegpt-tokens
40a53ff

winglian committed

Merge pull request #306 from ethanhs/xgen
dcdec44

winglian committed

Merge pull request #313 from OpenAccess-AI-Collective/tokenizer-llama2-embeddings
3ffb018

winglian committed

Merge pull request #299 from OpenAccess-AI-Collective/flash-attention-2
a94f2ee

winglian committed

don't resize embeddings to multiples of 32x by default
1066751

winglian committed

Merge pull request #308 from OpenAccess-AI-Collective/apache2-license
1b63bf1

winglian committed

add apache 2.0 license
5cce2a4

winglian committed

better handling since xgen tokenizer breaks with convert_tokens_to_ids
2a428e8

winglian committed

pin flash attention 2 to the fix for backwards pass
cdf85fd

winglian committed

flash attention 2
9b790d3

winglian committed

Add XGen info to README and example config
3881143

ethanhs committed

Merge pull request #304 from OpenAccess-AI-Collective/NanoCode012-patch-1
06c61d6

Nanobit committed

Merge pull request #300 from OpenAccess-AI-Collective/pytorch-201
262dc29

winglian committed

Fix(readme): Improve wording for push model
165907f

Nanobit committed

fix sdp attention to use the flash/mem-efficient context manager
a032c9f

winglian committed

explicitly pin flash attention 1 to v1.0.9
b06d3e3

winglian committed

use pytorch 2.0.1
c58034d

winglian committed

Merge pull request #293 from NanoCode012/fix/tokenize-speed
28fd429

Nanobit committed

feat: use multi-core
45ac7c4

Nanobit committed

Merge pull request #289 from OpenAccess-AI-Collective/hf_transfer
edd6980

winglian committed

Merge pull request #288 from OpenAccess-AI-Collective/NanoCode012-patch-1
dc6d251

winglian committed