Duplicate from openaccess-ai-collective/mpt-7b-wizardlm
---
datasets:
  - ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
language:
  - en
---

A WizardLM fine-tune of the MPT-7B base model, trained on the ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered dataset.

Trained for 3 epochs on 1 × A100 80GB.

Training logs: https://wandb.ai/wing-lian/mpt-wizard-7b/runs/2agnd9fz
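A minimal sketch of querying the model with Hugging Face `transformers`. The Alpaca-style prompt template below is an assumption based on the dataset name (`WizardLM_alpaca_evol_instruct_70k`); the card does not state the exact template used during training, so adjust if generations look off. MPT models ship custom modeling code, hence `trust_remote_code=True`.

```python
def format_prompt(instruction: str) -> str:
    """Wrap a user instruction in an Alpaca-style prompt.

    Assumed template: the model card does not document the training
    prompt format; this follows the common Alpaca/WizardLM convention.
    """
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    """Load the model from the hub and generate a response."""
    # Imported lazily so format_prompt stays usable without the (large)
    # model dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "openaccess-ai-collective/mpt-7b-wizardlm"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # MPT uses custom modeling code on the hub, so trust_remote_code is
    # required when loading.
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    inputs = tokenizer(format_prompt(instruction), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Loading the full 7B checkpoint needs roughly 14 GB of GPU memory in fp16; pass `torch_dtype` and `device_map` to `from_pretrained` as needed for your hardware.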