Text Generation
Transformers
PyTorch
English
gptj
Inference Endpoints

A question answering model finetuned from GPT4All-J v1.3 with Direct Preference Optimization (DPO).
Dataset: Dahoas/instruct-synthetic-prompt-responses.
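For reference, the per-example DPO objective compares how much more the finetuned policy prefers the chosen response over the rejected one, relative to the reference model (here, GPT4All-J v1.3). The sketch below is a minimal pure-Python illustration of that loss, not the actual training code; the function name and `beta` default are assumptions.

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log sigmoid(beta * (policy margin - reference margin)).

    Each argument is a summed log-probability of a response under the
    policy or reference model. `beta` scales the implicit KL penalty.
    """
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log sigmoid(logits)
```

When the policy and reference agree, the loss sits at log 2; it falls as the policy widens its preference for the chosen response beyond the reference's.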

The model was finetuned with the following prompt:
"Answer the following question in context:\n\nQuestion: " + samples["prompt"] + " Answer: "
It should be beneficial to use the same or a similar prompt for inference.
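A minimal inference sketch reproducing that template is shown below. The repo id `Z3R6X/gpt4all_dpo_instruct` is taken from this page; the generation settings (greedy decoding, 64 new tokens) are assumptions, not the card's recommendation.

```python
def build_prompt(question: str) -> str:
    """Reproduce the finetuning prompt template for inference."""
    return ("Answer the following question in context:\n\n"
            "Question: " + question + " Answer: ")

def generate_answer(question: str, repo: str = "Z3R6X/gpt4all_dpo_instruct") -> str:
    # Requires `transformers` and `torch`; downloads the GPT-J-sized checkpoint.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo)
    inputs = tokenizer(build_prompt(question), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    # Show the exact string the model sees at inference time.
    print(build_prompt("What is the capital of France?"))
```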

An increase in performance compared to GPT4All-J v1.3 was observed when using two-shot Chain-of-Thought prompting.
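The card does not include the exemplars used for that evaluation, so the sketch below only illustrates the general shape of a two-shot Chain-of-Thought prompt: two hypothetical worked question/answer pairs followed by the target question and a reasoning cue. Both exemplars are invented for illustration.

```python
# Hypothetical exemplars; NOT the ones used in the reported evaluation.
EXEMPLARS = [
    ("If there are 3 apples and you eat 1, how many remain?",
     "Start with 3 apples. Eating 1 leaves 3 - 1 = 2. The answer is 2."),
    ("Is a whale a fish?",
     "A whale breathes air and nurses its young, so it is a mammal. The answer is no."),
]

def two_shot_cot_prompt(question: str) -> str:
    """Prepend two worked examples, then pose the target question with a CoT cue."""
    parts = [f"Question: {q}\nAnswer: {a}" for q, a in EXEMPLARS]
    parts.append(f"Question: {question}\nAnswer: Let's think step by step.")
    return "\n\n".join(parts)
```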

| HellaSwag | WinoGrande | BoolQ | ARC-c  |
|-----------|------------|-------|--------|
| 62.37%    | 63.3%      | 65.2% | 32.76% |

Dataset used to train Z3R6X/gpt4all_dpo_instruct: Dahoas/instruct-synthetic-prompt-responses.