---
datasets:
- ehartford/dolphin
license: apache-2.0
---
**Base Model:** manojpreveen/mpt-30b-v4

**Tool:** MosaicML's llm-foundry (https://github.com/mosaicml/llm-foundry)

**Dataset:** Entire flan1m-GPT4 dataset

**Config yaml with Model Params:** https://huggingface.co/manojpreveen/mpt-30b-v5/blob/main/mpt-30b_v5.yaml
**Description:** mosaicml/mpt-30b → finetuned on the entire flan3m-GPT3.5 dataset for 4 epochs → iamplus/mpt-30b-v4 → finetuned on the entire flan1m-GPT4 dataset for 4 epochs → iamplus/mpt-30b-v5
**Prompt Format:**

```
<system>: [system prompt]
<human>: [question]
<bot>:
```
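A minimal sketch of assembling the prompt format above in Python. The `build_prompt` helper name and the single-newline separation between turns are assumptions; only the `<system>:`/`<human>:`/`<bot>:` tags come from the format description.

```python
def build_prompt(system_prompt: str, question: str) -> str:
    """Assemble a single-turn prompt in the format described above.

    Note: the newline separator between turns is an assumption,
    not confirmed by the model card.
    """
    return (
        f"<system>: {system_prompt}\n"
        f"<human>: {question}\n"
        f"<bot>:"
    )


prompt = build_prompt("You are a helpful assistant.", "What is MPT-30B?")
print(prompt)
```

The generated string ends at the `<bot>:` tag so the model's completion begins as the assistant's reply.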