improve: Enhance code readability of prompt_tokenizers.py (#707) 3a99495 unverified seungduk committed on Oct 19, 2023
Fix(model): Linear detected and added to target module with rope linear (#738) 440c3ab unverified Nanobit committed on Oct 19, 2023
catch ConnectionError when checking dataset from HuggingFace (#743) 992d57f unverified Napuh committed on Oct 19, 2023
Mistral: Sliding Window Attention with Flash Attention and Sample Packing (#732) a045db0 unverified casperhansen winglian committed on Oct 16, 2023
fixes for alpaca w chatml, and don't include attention_mask w mistral for flash attention (#728) 3553172 unverified winglian committed on Oct 14, 2023
tweak for xformers install w pytorch 2.1.0 (#727) 7f2027d unverified winglian committed on Oct 13, 2023
workaround for installing xformers w torch 2.1.0 (#725) 8d288a2 unverified winglian committed on Oct 13, 2023
fix pytorch 2.1.0 build, add multipack docs (#722) 2aa1f71 unverified winglian committed on Oct 13, 2023
improve handling of the prepared ds path and other cfg defaults (#701) 1c412c7 unverified winglian committed on Oct 13, 2023
Save Axolotl config as WandB artifact (#716) 490923f unverified Jan Philipp Harries committed on Oct 11, 2023
fix(doc): update default doc according to arg (#714) 5855dde unverified Nanobit committed on Oct 10, 2023
fix(doc): Add note on inference w sample packing (#712) 11c48c5 unverified Nanobit committed on Oct 10, 2023
Get qlora mistral-7b fine tuning working on a single 4090 (#708) 295b266 unverified lukemarsden committed on Oct 10, 2023
Merge pull request #693 from OpenAccess-AI-Collective/update-mistral-example 29b8f46 unverified mhenrichsen committed on Oct 7, 2023
Fix: Higher vram usage for mistral and sample_packing (#691) 669f1d0 unverified Nanobit committed on Oct 6, 2023
flash_attention + sample packing for stablelm 3b (#671) 2d60ba3 unverified winglian committed on Oct 5, 2023
Fix: ValueError when FA + Mistral when padding_side=right (#681) eb480df unverified Nanobit committed on Oct 5, 2023
Fix: Future deprecation warning with use_auth_token (#680) 69fac9a unverified Nanobit committed on Oct 5, 2023
Fix(tokenizer): Set rstrip,lstrip,norm to False (#678) e0b7eea unverified Nanobit committed on Oct 5, 2023
Fix(version): Update FA to work with Mistral SWA (#673) 43856c0 unverified Nanobit committed on Oct 4, 2023
Feat: Allow usage of native Mistral FA when no sample_packing (#669) 697c50d unverified Nanobit committed on Oct 4, 2023
Feat: Add config yaml to section for reprod in bug-report.yaml (#667) 90e0d67 unverified Nanobit committed on Oct 3, 2023
refactor to set eval_batch_size earlier if unset, so we can warn if mismatched (#662) 2642cae unverified winglian committed on Oct 3, 2023
prepared dataset caching, other misc fixes (#665) e50a64e unverified winglian committed on Oct 3, 2023
make sure we also run CI tests when requirements.txt changes (#663) f4868d7 unverified winglian committed on Oct 2, 2023
don't strip the prompt for check since we don't strip to tokenize anymore (#650) 8662e8f unverified winglian committed on Sep 28, 2023
fix for flash attn w mistral w/o sample packing (#648) b2edaae unverified winglian committed on Sep 28, 2023
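Several of the commits above (#732, #691, #671, #669, #662) revolve around the same handful of Axolotl config options for Mistral, flash attention, and sample packing. A minimal sketch of how those options fit together, assuming the YAML config schema in use around these releases; the base model and all values are illustrative, not taken from any commit:

  # illustrative Axolotl config excerpt (values are assumptions, not from the log)
  base_model: mistralai/Mistral-7B-v0.1
  load_in_4bit: true              # QLoRA-style 4-bit loading, as in the single-4090 work (#708)
  adapter: qlora
  sequence_len: 4096
  sample_packing: true            # multipack; paired with flash attention per #732 and #671
  pad_to_sequence_len: true
  flash_attention: true           # native Mistral FA is an alternative when sample_packing is off (#669)
  micro_batch_size: 2
  eval_batch_size: 2              # set explicitly to avoid the mismatch #662 warns about

Note that per the doc fix in #712, sample packing is a training-time feature, so configs like this are not meant to be reused as-is for inference.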