chore(readme): update instruction to set config to load from cache (#1030) b31038a Nanobit committed on Jan 3, 2024
use recommended setting for use_reentrant w gradient checkpointing (#1021) 4d2e842 winglian committed on Jan 2, 2024
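The PR above concerns how `use_reentrant` is passed to gradient checkpointing. A minimal YAML sketch, assuming the `gradient_checkpointing_kwargs` passthrough key; the value shown follows PyTorch's general recommendation and is not quoted from the PR:

```yaml
gradient_checkpointing: true
gradient_checkpointing_kwargs:   # assumed passthrough key
  use_reentrant: false           # PyTorch's recommended setting (assumption)
```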
Fix: bf16 support for inference (#981) 3678a6c Tazik Shahjahan, winglian committed on Dec 29, 2023
[WandB] Push axolotl config to top level wandb files (#1014) 4f4d638 hamel committed on Dec 29, 2023
feat: remove need to add load_in* during merge (#1017) f6ecf14 Nanobit committed on Dec 29, 2023
[Docs] Nit: Remind people to auth to wandb if they are going to use it (#1013) dec66d7 hamel committed on Dec 29, 2023
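To act on the reminder above, authenticate once with `wandb login` (or export `WANDB_API_KEY`) before training. The related axolotl config keys, with hypothetical values:

```yaml
wandb_project: my-project   # hypothetical project name
wandb_entity: my-team       # hypothetical W&B entity
```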
remove landmark attn and xpos rope implementations (#1010) 70b46ca winglian committed on Dec 28, 2023
Set eval_sample_packing to false in mistral config.yaml (#1003) 384b817 Kevin Sydney committed on Dec 28, 2023
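A minimal sketch of the setting referenced above, keeping packing for training while disabling it for evaluation:

```yaml
sample_packing: true
eval_sample_packing: false   # the value set in the Mistral example
```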
FEAT: add tagging support to axolotl (#1004) db9094d Younes Belkada, winglian committed on Dec 27, 2023
Add an example config for finetuning a 34B model on a 24GB GPU (#1000) 6ef46f8 Evan Griffiths committed on Dec 25, 2023
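The example file added in #1000 is not reproduced here; the sketch below only illustrates the kind of memory-saving knobs a 34B-on-24GB config typically leans on (all standard axolotl options, values hypothetical):

```yaml
load_in_4bit: true            # 4-bit base weights (QLoRA-style)
adapter: qlora
sequence_len: 2048
micro_batch_size: 1
gradient_accumulation_steps: 4
gradient_checkpointing: true
flash_attention: true
```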
set output_router_logits for mixtral config (#995) 628b754 winglian committed on Dec 22, 2023
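`output_router_logits` is a Mixtral model-config flag that enables the auxiliary router (load-balancing) loss during training. A hedged sketch, assuming it is set through axolotl's `model_config` passthrough:

```yaml
model_config:
  output_router_logits: true   # enables the MoE router auxiliary loss
```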
Feat: Warn users to add to modules_to_save when adding tokens or switching special_tokens (#787) 1ffa386 Nanobit committed on Dec 22, 2023
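When new tokens are added, the embedding and output layers must be trained alongside the adapter. A sketch using standard axolotl keys (the added token is hypothetical):

```yaml
lora_modules_to_save:
  - embed_tokens
  - lm_head
special_tokens:
  pad_token: "<pad>"   # hypothetical new token triggering the warning
```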
update transformers to fix checkpoint saving (#963) f28e755 dumpmemory committed on Dec 16, 2023
fix: switch to using the HuggingFace Transformers NEFT implementation (#941) ef24342 dg-kalle committed on Dec 13, 2023
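With the upstream Transformers implementation, NEFTune is driven by a single knob; axolotl exposes it as `neftune_noise_alpha` (the value below is illustrative):

```yaml
neftune_noise_alpha: 5   # noise scale for NEFTune embeddings; 5 is a common starting point
```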
fix: remove excessive newlines in system prompt(s) for alpaca (#936) 450e04d dg-kalle committed on Dec 13, 2023
More hints on what to do with CUDA out-of-memory errors (#925) b0cf397 Juraj Bednar committed on Dec 13, 2023
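Typical first-line OOM mitigations in an axolotl config look like the sketch below (all standard keys; values are examples, not recommendations):

```yaml
micro_batch_size: 1              # smallest per-device batch
gradient_accumulation_steps: 8   # recover the effective batch size
gradient_checkpointing: true     # trade compute for activation memory
sequence_len: 1024               # shorter sequences cut activation memory
```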
new evals_per_epoch and saves_per_epoch options to make eval/save scheduling cleaner (#944) 5f79b82 winglian committed on Dec 12, 2023
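These options replace raw step counts with per-epoch frequencies, e.g.:

```yaml
evals_per_epoch: 4   # run evaluation four times per epoch
saves_per_epoch: 1   # save one checkpoint per epoch
```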
Respect sequence_len in config for `type: llama2_chat` (#926) f1de29d hamel committed on Dec 12, 2023
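With the fix, the configured `sequence_len` applies to `llama2_chat` datasets as well, e.g.:

```yaml
sequence_len: 4096
datasets:
  - path: my/dataset    # hypothetical dataset path
    type: llama2_chat
```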
Mixtral: more correct MoE, lower loss (#932) 86487c2 casperhansen committed on Dec 10, 2023
update to latest transformers for Mixtral support (#929) 35f9b0f winglian committed on Dec 10, 2023
fix ChatML prompt template by removing a line break (#922) 03c6318 Timothy Lim (timlim123) committed on Dec 9, 2023
fix(tokenizer): handle fast tokenizer properly for bos/eos (#914) fde091c Nanobit committed on Dec 8, 2023
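Explicit bos/eos settings live under the standard `special_tokens` block; the Llama-style values below are an assumption, not a quote from the PR:

```yaml
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
```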
feat: add check for quantized model (#913) a581e9f Nanobit, winglian committed on Dec 4, 2023