Feat: warn to add to modules_to_save when adding tokens or switching special_tokens (#787) 1ffa386 (Nanobit, Dec 22, 2023)
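For context, adding tokens or swapping special tokens resizes the embedding and output layers, which adapters alone do not train; the new warning points at saving those modules in full. A minimal sketch, assuming the `tokens`, `special_tokens`, and `lora_modules_to_save` keys from axolotl's config (values illustrative):

```yaml
special_tokens:
  eos_token: "<|im_end|>"   # illustrative: switching to a ChatML-style eos
tokens:
  - "<|im_start|>"          # illustrative: a newly added token
lora_modules_to_save:       # train the resized embeddings and head in full
  - embed_tokens
  - lm_head
```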
update transformers to fix checkpoint saving (#963) f28e755 (dumpmemory, Dec 16, 2023)
fix: switch to using the HuggingFace Transformers NEFT implementation (#941) ef24342 (dg-kalle, Dec 13, 2023)
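After this switch, NEFTune noise is applied by the upstream Transformers trainer rather than a local patch. A minimal sketch, assuming the config exposes the upstream `neftune_noise_alpha` argument (value illustrative):

```yaml
neftune_noise_alpha: 5   # illustrative; the NEFTune paper evaluates 5, 10, and 15
```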
fix: remove excessive newlines in system prompt(s) for alpaca (#936) 450e04d (dg-kalle, Dec 13, 2023)
More hints on what to do with CUDA out-of-memory errors (#925) b0cf397 (Juraj Bednar, Dec 13, 2023)
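The usual levers such hints cover are batch size, gradient checkpointing, and sequence length. A minimal sketch of memory-reducing settings, assuming standard axolotl keys (values illustrative, not exhaustive):

```yaml
micro_batch_size: 1              # smaller per-device batch
gradient_accumulation_steps: 8   # keep the effective batch size up
gradient_checkpointing: true     # trade recompute for activation memory
sequence_len: 2048               # shorter sequences need less memory
```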
new evals_per_epoch and saves_per_epoch to make things cleaner (#944) 5f79b82 (winglian, Dec 12, 2023)
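These options express eval and checkpoint frequency as counts per epoch instead of raw step intervals. A minimal sketch using the two new keys from the PR title (values illustrative):

```yaml
evals_per_epoch: 4   # run evaluation four times per epoch
saves_per_epoch: 1   # save a checkpoint once per epoch
```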
Respect sequence_len in config for `type: llama2_chat` (#926) f1de29d (hamel, Dec 12, 2023)
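A minimal sketch of the configuration this fix concerns, with a hypothetical dataset path (values illustrative):

```yaml
sequence_len: 4096   # llama2_chat prompts now honor this limit
datasets:
  - path: user/sample-llama2-chat-data   # hypothetical
    type: llama2_chat
```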
Mixtral: More correct MoE, lower loss (#932) 86487c2 (casperhansen, Dec 10, 2023)
update to latest transformers for Mixtral support (#929) 35f9b0f (winglian, Dec 10, 2023)
fix: remove linebreak from ChatML prompt template (#922) 03c6318 (Timothy Lim, Dec 9, 2023)
fix(tokenizer): handle fast tokenizer properly for bos/eos (#914) fde091c (Nanobit, Dec 8, 2023)
feat: add check for quantized model (#913) a581e9f (Nanobit, winglian, Dec 4, 2023)
Support device_map=sequential & max_memory config parameters (#903) 992e742 (Bryan Thornbury, winglian, Dec 4, 2023)
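`device_map: sequential` fills GPUs in order rather than balancing layers across them, and `max_memory` caps what each device may use. A minimal sketch using the two keys from the PR title; the `max_memory` value format follows the Hugging Face convention, and the caps are illustrative:

```yaml
device_map: sequential
max_memory:
  0: 20GiB     # cap for GPU 0 (illustrative)
  cpu: 60GiB   # headroom for CPU offload (illustrative)
```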
feature: loss watchdog for terminating training runs that are failing (#899) 58ec8b1 (Karl-Johan Alm, Dec 4, 2023)
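The watchdog aborts a run whose loss stays pathologically high instead of burning compute to completion. A minimal sketch, assuming the `loss_watchdog_threshold` and `loss_watchdog_patience` keys as documented in axolotl's README (values illustrative):

```yaml
loss_watchdog_threshold: 5.0   # treat the run as failing above this loss
loss_watchdog_patience: 3      # abort after this many offending steps
```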
Remove learning rate scheduler in deepspeed config to avoid conflict (#909) 476a205 (Haoxiang-Wang, Dec 4, 2023)
ensure merged model matches the training dtype (#902) 1d21aa6 (winglian, Nov 29, 2023)
Determine FSDP/deepspeed settings on device select (#883) 71b7ea3 (Karl-Johan Alm, winglian, Nov 29, 2023)
update datasets version to cut down the warnings due to pyarrow arg change (#897) 6a4562a (winglian, Nov 25, 2023)
fix: warning should not show if eval_batch_size not provided (#896) 7ee3c4c (Nanobit, Nov 25, 2023)
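Per the commit title, the mismatch warning should only fire when `eval_batch_size` is explicitly set and differs from `micro_batch_size`; when omitted it falls back to the training batch size. A minimal sketch (values illustrative):

```yaml
micro_batch_size: 2
eval_batch_size: 2   # optional; omit to default to micro_batch_size
```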
chore(doc): Add info on changing role in sharegpt (#886) 9fc29e0 (Nanobit, Nov 22, 2023)
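A minimal sketch of remapping ShareGPT role keys along the lines of those docs, assuming the `field_human` and `field_model` overrides and a hypothetical dataset path:

```yaml
datasets:
  - path: user/custom-sharegpt-data   # hypothetical
    type: sharegpt
    field_human: question   # JSON key to treat as the human turn
    field_model: answer     # JSON key to treat as the assistant turn
```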
try #2: pin hf transformers and accelerate to latest release, don't reinstall pytorch (#867) 0de1457 (winglian, Nov 16, 2023)