Add disclaimer
# MPT-7B-Instruct

This is MPT-7B-Instruct with added support for finetuning with PEFT (tested with QLoRA). It is not finetuned further; the weights are the same as the original MPT-7B-Instruct.

I have not traced through the whole Hugging Face stack to confirm that everything is handled correctly, but the model does finetune with QLoRA and the outputs are reasonable.
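
As a rough illustration, here is a minimal sketch of what a QLoRA-style setup for this model might look like, assuming recent `transformers`, `peft`, and `bitsandbytes` versions. The repo id placeholder, the LoRA hyperparameters, and the `target_modules` names (taken from MPT's attention implementation) are assumptions, not something specified in this card:

```python
# Minimal QLoRA-style setup sketch (assumptions noted above; not from this card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "path/to/this-repo"  # hypothetical placeholder: substitute this model's repo id

# Load the base model in 4-bit so LoRA adapters can be trained on top (QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    trust_remote_code=True,  # MPT ships custom modeling code
    device_map="auto",
)

# Prepare the quantized model and attach LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["Wqkv", "out_proj"],  # assumed MPT attention projection names
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights should be trainable
```

From here the wrapped model can be passed to a standard `transformers.Trainer` (or similar) training loop like any other PEFT model.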

Inspired by the implementation at https://huggingface.co/cekal/mpt-7b-peft-compatible/commits/main and the discussion at https://huggingface.co/mosaicml/mpt-7b/discussions/42.

The original description from the MosaicML team is below:

MPT-7B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.

* License: _CC-By-SA-3.0_