[Docs] Nit: Remind people to auth to wandb if they are going to use it (#1013) dec66d7 hamel committed on Dec 29, 2023
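For context on the entry above: axolotl's W&B logging assumes you have already authenticated, e.g. by running `wandb login` or exporting `WANDB_API_KEY`. A minimal sketch of the relevant config section (project and entity names here are hypothetical):

```yaml
# Minimal W&B section of an axolotl config; requires prior
# authentication via `wandb login` or the WANDB_API_KEY env var.
wandb_project: my-finetune   # hypothetical project name
wandb_entity: my-team        # optional; your W&B username or team
```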
remove landmark attn and xpos rope implementations (#1010) 70b46ca winglian committed on Dec 28, 2023
Set eval_sample_packing to false in mistral config.yaml (#1003) 384b817 Kevin Sydney committed on Dec 28, 2023
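Sample packing concatenates short examples into full-length sequences to improve throughput; the fix above disables it for the eval split in the Mistral example. A sketch of how the keys relate (values illustrative):

```yaml
sequence_len: 8192           # packing fills sequences up to this length
sample_packing: true         # pack the training split
eval_sample_packing: false   # evaluate on unpacked examples (this fix)
```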
FEAT: add tagging support to axolotl (#1004) db9094d Younes Belkada winglian committed on Dec 27, 2023
Add an example config for finetuning a 34B model on a 24GB GPU (#1000) 6ef46f8 Evan Griffiths committed on Dec 25, 2023
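Fitting a 34B model on a single 24GB card typically means 4-bit QLoRA with gradient checkpointing and small micro-batches. A rough sketch of the kind of settings involved; the base model and all values here are illustrative, not the committed example:

```yaml
base_model: 01-ai/Yi-34B       # hypothetical 34B base model
load_in_4bit: true             # 4-bit (NF4) quantization via bitsandbytes
adapter: qlora
lora_r: 16
lora_alpha: 32
sequence_len: 2048
micro_batch_size: 1
gradient_accumulation_steps: 8
gradient_checkpointing: true   # trade compute for memory
flash_attention: true
```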
set output_router_logits for mixtral config (#995) 628b754 winglian committed on Dec 22, 2023
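Mixtral's auxiliary load-balancing loss is only applied when the model exposes its router logits. A sketch of the change, assuming axolotl's mechanism for passing overrides through to the HF model config:

```yaml
# Pass an override through to the HF model config so the MoE
# load-balancing (router) loss is included in the training loss.
model_config:
  output_router_logits: true
```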
Feat: Warn to add to modules_to_save when adding tokens or switching special_tokens (#787) 1ffa386 Nanobit committed on Dec 22, 2023
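The reason for the warning: when new tokens are added or special tokens change, the embedding matrix and LM head must be trained and saved alongside the LoRA adapters, or the merged model cannot use the new tokens. A sketch, assuming the `lora_modules_to_save` key and an illustrative added token:

```yaml
tokens:
  - "<|im_start|>"       # illustrative added token
lora_modules_to_save:
  - embed_tokens         # input embedding matrix
  - lm_head              # output head
```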
update transformers to fix checkpoint saving (#963) f28e755 dumpmemory committed on Dec 16, 2023
fix: switch to using the HuggingFace Transformers NEFT implementation (#941) ef24342 dg-kalle committed on Dec 13, 2023
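NEFTune adds uniform noise to embedding activations during finetuning; after this change axolotl defers to the upstream Transformers implementation, which is driven by a single noise-scale parameter. A minimal sketch:

```yaml
# NEFTune noise scale, passed through to the Transformers trainer;
# 5 is a starting value commonly cited from the NEFTune paper.
neftune_noise_alpha: 5
```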
fix: remove excessive newlines in system prompt(s) for alpaca (#936) 450e04d dg-kalle committed on Dec 13, 2023
More hints on what to do with CUDA out-of-memory errors (#925) b0cf397 Juraj Bednar committed on Dec 13, 2023
Add evals_per_epoch and saves_per_epoch options to make things cleaner (#944) 5f79b82 winglian committed on Dec 12, 2023
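These options replace hand-computing `eval_steps`/`save_steps` against the dataset size: you state how many evaluations and checkpoints you want per epoch and the step intervals are derived. A sketch with illustrative values:

```yaml
num_epochs: 3
evals_per_epoch: 4   # run evaluation 4 times per epoch
saves_per_epoch: 1   # write one checkpoint per epoch
```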
Respect sequence_len in config for `type: llama2_chat` (#926) f1de29d hamel committed on Dec 12, 2023
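A sketch of the config shape this fix affects; the dataset path is hypothetical, and the point is that the top-level `sequence_len` is now honored when tokenizing `llama2_chat` data:

```yaml
datasets:
  - path: my-org/my-chat-data   # hypothetical dataset
    type: llama2_chat
sequence_len: 4096              # now honored by llama2_chat tokenization
```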
Mixtral: More correct MoE, lower loss (#932) 86487c2 casperhansen committed on Dec 10, 2023
update to latest transformers for mixtral support (#929) 35f9b0f winglian committed on Dec 10, 2023
fix chatml prompt template by removing a linebreak (#922) 03c6318 timlim123 Timothy Lim committed on Dec 9, 2023
fix(tokenizer): handle fast tokenizer properly for bos/eos (#914) fde091c Nanobit committed on Dec 8, 2023
feat: add check for quantized model (#913) a581e9f Nanobit winglian committed on Dec 4, 2023
Support device_map=sequential & max_memory config parameters (#903) 992e742 Bryan Thornbury winglian committed on Dec 4, 2023
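`device_map: sequential` fills GPUs in order instead of balancing layers across them, and `max_memory` caps what each device may take, which helps pin a model to specific cards. A sketch assuming the mapping is passed through to HF's `from_pretrained`; the exact YAML shape in axolotl may differ:

```yaml
device_map: sequential   # fill GPUs in order rather than balancing layers
max_memory:
  0: 20GiB               # cap GPU 0
  cpu: 60GiB             # allow offload to CPU RAM
```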
feature: loss watchdog for terminating training runs that are failing (#899) 58ec8b1 user735 Karl-Johan Alm committed on Dec 4, 2023
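The watchdog aborts a run when training loss stays above a threshold for too many steps, saving compute on runs that have diverged. A sketch, assuming the `loss_watchdog_*` keys introduced by this PR:

```yaml
loss_watchdog_threshold: 5.0   # abort if loss stays above this value...
loss_watchdog_patience: 3      # ...for this many consecutive logged steps
```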
Remove learning rate scheduler in deepspeed config to avoid conflict (#909) 476a205 Haoxiang-Wang committed on Dec 4, 2023
ensure merged model matches the training dtype (#902) 1d21aa6 winglian committed on Nov 29, 2023
Determine FSDP/deepspeed settings on device select. (#883) 71b7ea3 user735 Karl-Johan Alm winglian committed on Nov 29, 2023
update datasets version to cut down the warnings due to pyarrow arg change (#897) 6a4562a winglian committed on Nov 25, 2023