Helpful chatbot finetuned from [GPT4All-J v1.3](https://huggingface.co/nomic-ai/gpt4all-j) with [Direct Preference Optimization](https://arxiv.org/abs/2305.18290). \
Dataset: [Dahoas/instruct-synthetic-prompt-responses](https://huggingface.co/datasets/Dahoas/instruct-synthetic-prompt-responses).
The model was finetuned with the following prompt: \
``"Answer the following question in context:\n\nQuestion: " + samples["prompt"] + " Answer: "`` \
It should be beneficial to use the same or a similar prompt for inference.
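For illustration, the template can be applied like this before tokenization. This is a minimal sketch; the helper function, the example questions, and the plain dict standing in for a `datasets` batch are assumptions for demonstration, not part of the model card:

```python
def build_prompt(question: str) -> str:
    # Same template the card describes for finetuning.
    return "Answer the following question in context:\n\nQuestion: " + question + " Answer: "

# The card's template indexes samples["prompt"], suggesting a batched,
# datasets-style mapping; a plain dict stands in for a dataset batch here.
samples = {"prompt": ["What is the capital of France?", "Who wrote Hamlet?"]}
prompts = [build_prompt(q) for q in samples["prompt"]]
for p in prompts:
    print(repr(p))
```

The resulting strings can be fed directly to the tokenizer and model.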
An increase in performance compared to [GPT4All-J v1.3](https://huggingface.co/nomic-ai/gpt4all-j) was observed when using two-shot Chain-of-Thought prompting.
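A two-shot Chain-of-Thought prompt can be assembled by prepending two worked examples to the question. The exemplars below are illustrative placeholders, not the ones used in the evaluation:

```python
# Illustrative two-shot Chain-of-Thought exemplars; these are made up
# and not taken from the evaluation reported in the table below.
exemplars = [
    ("If a train travels 60 km in 1 hour, how far does it travel in 3 hours?",
     "The train covers 60 km per hour, so in 3 hours it covers 60 * 3 = 180 km. The answer is 180 km."),
    ("Tom has 5 apples and gives away 2. How many are left?",
     "Tom starts with 5 apples and gives away 2, leaving 5 - 2 = 3. The answer is 3."),
]

def two_shot_cot_prompt(question: str) -> str:
    # Each exemplar shows the reasoning before the final answer; the last
    # question is left open so the model continues the same pattern.
    parts = [f"Question: {q}\nAnswer: {a}" for q, a in exemplars]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

print(two_shot_cot_prompt("A shop sells pens at 2 dollars each. What do 4 pens cost?"))
```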
| HellaSwag | WinoGrande | BoolQ | ARC-c |
|:---------:|:----------:|:-----:|:------:|
|  62.37%   |   63.3%    | 65.2% | 32.76% |