|
--- |
|
datasets: |
|
- ehartford/dolphin |
|
license: apache-2.0 |
|
--- |
|
|
|
**Base Model:** iamplus/mpt-30b-v2
|
|
|
**Tool:** MosaicML's llm-foundry (https://github.com/mosaicml/llm-foundry)
|
|
|
**Dataset:** Entire flan1m-GPT4 dataset
|
|
|
**Config YAML with Model Params:** https://huggingface.co/iamplus/mpt-30b-v3/blob/main/mpt-30b_orca.yaml
|
|
|
***Description:*** **mosaicml/mpt-30b** -> finetuned on the entire flan3m-GPT3.5 dataset for 1 epoch -> **iamplus/mpt-30b-v2** -> finetuned on the entire flan1m-GPT4 dataset for 1 epoch -> **iamplus/mpt-30b-v3**
|
|
|
**Prompt Format:**
|
|
|
```
<system>: [system prompt]

<human>: [question]

<bot>:
```
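The format above can be assembled programmatically before sending text to the model. Below is a minimal sketch; the `build_prompt` helper name and its signature are illustrative assumptions, not part of any official API, while the `<system>`/`<human>`/`<bot>` tags come directly from the Prompt Format section.

```python
def build_prompt(system_prompt: str, question: str) -> str:
    """Assemble a prompt string in the card's documented format.

    Tags (<system>, <human>, <bot>) follow the Prompt Format section;
    the trailing "<bot>:" leaves room for the model's generation.
    """
    return (
        f"<system>: {system_prompt}\n\n"
        f"<human>: {question}\n\n"
        f"<bot>:"
    )


# Example usage (hypothetical system prompt and question):
prompt = build_prompt(
    "You are a helpful assistant.",
    "Summarize the MPT-30B architecture in one sentence.",
)
print(prompt)
```

The generated text from the model would then be appended after the final `<bot>:` tag.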