---
tags:
- finetuned
- quantized
- 4-bit
- AWQ
- transformers
- pytorch
- mistral
- instruct
- text-generation
- conversational
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
model-index:
- name: Nous-Hermes-2-Mistral-7B-DPO
results: []
datasets:
- teknium/OpenHermes-2.5
license: apache-2.0
language:
- en
quantized_by: Suparious
pipeline_tag: text-generation
model_creator: NousResearch
model_name: Nous Hermes 2 - Mistral 7B - DPO
inference: false
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
---
# Nous Hermes 2 - Mistral 7B - DPO
- Model creator: [NousResearch](https://huggingface.co/NousResearch)
- Original model: [Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO)
```bibtex
@misc{Nous-Hermes-2-Mistral-7B-DPO,
  url={https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO},
  title={Nous Hermes 2 Mistral 7B DPO},
  author={Teknium and theemozilla and karan4d and huemin_art}
}
```
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/PDleZIZK3vE3ATfXRRySv.png)
## Model Description
Nous Hermes 2 on Mistral 7B DPO is the new flagship 7B Hermes! This model was DPO'd from [Teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and improved across the board on every benchmark tested: AGIEval, BigBench Reasoning, GPT4All, and TruthfulQA.
The model prior to DPO was trained on 1,000,000 instructions/chats of GPT-4 quality or better, primarily synthetic data as well as other high quality datasets, available from the repository [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5).
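## Prompt Format
The model uses the ChatML prompt format defined in the front matter above. As a minimal sketch (assuming the tokenizer ships the ChatML chat template that the front matter suggests, and using the original NousResearch repository for illustration), the same format can be rendered through the standard `transformers` chat-template API:
```python
from transformers import AutoTokenizer

# The original model's tokenizer carries the ChatML chat template;
# an AWQ-quantized upload of this model would typically reuse it unchanged.
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Nous-Hermes-2-Mistral-7B-DPO")

messages = [
    {"role": "system", "content": "You are Hermes 2, a helpful assistant."},
    {"role": "user", "content": "Hello, who are you?"},
]

# Produces the layout from the front matter:
# <|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```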
## Thank you to FluidStack for sponsoring compute for this model
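## Example Inference
Since the tags mark this upload as a 4-bit AWQ quantization, here is a hedged end-to-end generation sketch. It assumes a recent `transformers` with `autoawq` installed (transformers dispatches AWQ checkpoints to it automatically); the repository id below is a placeholder, not the confirmed name of this upload:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id for this AWQ upload -- substitute the actual repository.
model_id = "Suparious/Nous-Hermes-2-Mistral-7B-DPO-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# AWQ 4-bit weights load through the standard API when autoawq is installed.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Hermes 2, a helpful assistant."},
    {"role": "user", "content": "Summarize what DPO fine-tuning does."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Strip the prompt tokens and decode only the assistant's reply.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```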