Add docker advanced instruction to README (#792) 2e71ff0 unverified gordicaleksa committed on Oct 27, 2023
Threaded MultipackDistributedDataloader with prefetched samples (#759) 05bd6f1 unverified casperhansen committed on Oct 26, 2023
chore(readme): Improve documentation on conversation field (#782) 20aa4b5 unverified Nanobit committed on Oct 24, 2023
refactor setup trainer so we can add more hooks (#773) 6c81c61 unverified winglian committed on Oct 23, 2023
disable eval table w sample packing in examples (#778) 9b43e7e unverified winglian committed on Oct 23, 2023
simplify by removing duplicate base_model_config (#772) 2d8def6 unverified winglian committed on Oct 23, 2023
Fix: Warn when fullfinetune without adapter (#770) 44c9d01 unverified Nanobit committed on Oct 22, 2023
convert exponential notation lr to floats (#771) ca84cca unverified winglian committed on Oct 22, 2023
Fix: eval table conflict with eval_sample_packing (#769) 9923b72 unverified Nanobit committed on Oct 22, 2023
chore: bump transformers to v4.34.1 to fix tokenizer issue (#745) 8966a6f unverified Nanobit committed on Oct 20, 2023
add a latest tag for regular axolotl image, cleanup extraneous print statement (#746) 70157cc unverified winglian committed on Oct 19, 2023
improve: Enhance code readability of prompt_tokenizers.py (#707) 3a99495 unverified seungduk committed on Oct 19, 2023
Fix(model): Linear detected and added to target module with rope linear (#738) 440c3ab unverified Nanobit committed on Oct 19, 2023
catch ConnectionError when checking dataset from HuggingFace (#743) 992d57f unverified Napuh committed on Oct 19, 2023
Mistral: Sliding Window Attention with Flash Attention and Sample Packing (#732) a045db0 unverified casperhansen winglian committed on Oct 16, 2023
fixes for alpaca w chatml, and don't include attention_mask w mistral for flash attention (#728) 3553172 unverified winglian committed on Oct 14, 2023
tweak for xformers install w pytorch 2.1.0 (#727) 7f2027d unverified winglian committed on Oct 13, 2023
workaround for installing xformers w torch 2.1.0 (#725) 8d288a2 unverified winglian committed on Oct 13, 2023
fix pytorch 2.1.0 build, add multipack docs (#722) 2aa1f71 unverified winglian committed on Oct 13, 2023
improve handling of the prepared ds path and other cfg defaults (#701) 1c412c7 unverified winglian committed on Oct 13, 2023
Save Axolotl config as WandB artifact (#716) 490923f unverified Jan Philipp Harries committed on Oct 11, 2023
fix(doc): update default doc according to arg (#714) 5855dde unverified Nanobit committed on Oct 10, 2023
fix(doc): Add note on inference w sample packing (#712) 11c48c5 unverified Nanobit committed on Oct 10, 2023
Get qlora mistral-7b fine tuning working on a single 4090 (#708) 295b266 unverified lukemarsden committed on Oct 10, 2023
Merge pull request #693 from OpenAccess-AI-Collective/update-mistral-example 29b8f46 unverified mhenrichsen committed on Oct 7, 2023
Fix: Higher vram usage for mistral and sample_packing (#691) 669f1d0 unverified Nanobit committed on Oct 6, 2023
flash_attention + sample packing for stablelm 3b (#671) 2d60ba3 unverified winglian committed on Oct 5, 2023