Add GPT-NeoX embeddings, fix Phi-2 inputs, and fix the casting (#1083) 78c5b19 unverified winglian committed on Jan 11
Be more robust about checking embedding modules for LoRA finetunes (#1074) [skip ci] 0f10080 unverified winglian committed on Jan 10
Feat: Warn that modules_to_save should be set when adding tokens or switching special_tokens (#787) 1ffa386 unverified Nanobit committed on Dec 22, 2023
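When a finetune adds tokens or changes special tokens, the embedding and output layers are resized and need to be trained and saved alongside the adapter, which is what this warning points at. A minimal sketch of the relevant axolotl config, assuming LLaMA-style module names (embed_tokens, lm_head):

```yaml
adapter: lora
special_tokens:
  pad_token: "<pad>"      # adding or changing tokens resizes the embeddings
lora_modules_to_save:     # so train and save the embedding layers too
  - embed_tokens
  - lm_head
```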
Fix: Warn when running a full finetune without an adapter (#770) 44c9d01 unverified Nanobit committed on Oct 22, 2023
Fix: eval table conflict with eval_sample_packing (#769) 9923b72 unverified Nanobit committed on Oct 22, 2023
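For context, a hedged sketch of the combination this fix guards against: the eval prediction table and sample packing of the eval set don't mix, so one of the two has to be turned off (eval_table_size and eval_sample_packing are the axolotl keys involved):

```yaml
eval_table_size: 5           # log a table of sample predictions during evals
eval_sample_packing: false   # packing the eval set conflicts with the table
```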
Fix(cfg): Add validation for save_strategy and eval_strategy (#633) 383f88d unverified Nanobit committed on Sep 28, 2023
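A hedged illustration of strategy settings kept consistent; the exact rules this validation applies aren't spelled out in the commit title, so this only shows the common consistent case following the usual transformers TrainingArguments semantics:

```yaml
save_strategy: steps         # must be compatible with save_steps below
save_steps: 500
evaluation_strategy: steps   # likewise for eval_steps
eval_steps: 500
```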
Fix: Fail the bf16 check when running on CPU during merge (#631) cfbce02 unverified Nanobit committed on Sep 25, 2023
Recommend padding when using sample packing (#531) 3437149 unverified winglian committed on Sep 6, 2023
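The recommendation as a config sketch: with sample packing on, pad each packed sequence out to the full context length so batches have a uniform shape (sample_packing and pad_to_sequence_len are the axolotl keys involved):

```yaml
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true    # recommended whenever sample_packing is on
```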
Attention mask and position ID fixes for packing (#285) 2bb0b78 unverified winglian committed on Aug 12, 2023
Update docs for grad_accu and add validation tests for batch size 3c71c8d Nanobit committed on May 31, 2023
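The batch-size arithmetic the doc update and validation revolve around, sketched: the effective batch size is micro_batch_size × gradient_accumulation_steps × number of GPUs, so these two knobs trade memory for optimizer step size:

```yaml
micro_batch_size: 2              # per-GPU batch per forward/backward pass
gradient_accumulation_steps: 8   # micro-batches accumulated per optimizer step
# effective batch size on a single GPU: 2 * 8 = 16
```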
New hf_use_auth_token setting so logging in to HF isn't required 1c33eb8 winglian committed on May 28, 2023
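A minimal sketch of the setting: with hf_use_auth_token enabled, the token from the environment or cached credentials is passed along to the Hub calls, so an interactive huggingface-cli login isn't required (the environment-variable route in the comment is an assumption about how the token is supplied):

```yaml
hf_use_auth_token: true   # e.g. export HUGGING_FACE_HUB_TOKEN=hf_... beforehand
```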