---
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
---
# Model Card for OpenBezoar-HH-RLHF-DPO

OpenBezoar-HH-RLHF-DPO is an LLM that has been fine-tuned for human-preference alignment with Direct Preference Optimization (DPO), on top of the [OpenBezoar-HH-RLHF-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-HH-RLHF-SFT) model, using a subset of [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf).

## Model Details

- Base Model: [OpenBezoar-HH-RLHF-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-HH-RLHF-SFT)
- Dataset used for SFT: First 100K examples of the [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset
- Alignment Method: [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290)
- Epochs: 1

### Model Description

OpenBezoar-HH-RLHF-DPO is an LLM built upon the OpenLLaMA 3B v2 architecture. It has been fine-tuned for human-preference alignment using DPO, performed on top of the [OpenBezoar-HH-RLHF-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-HH-RLHF-SFT) model. For more information, please refer to our paper.
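
As a rough illustration of this alignment step, the sketch below shows how DPO fine-tuning on HH-RLHF preference pairs might look with TRL's `DPOTrainer`. Apart from the base checkpoint, the dataset, and the single epoch stated above, everything here is an assumption: the authors' actual preprocessing, hyperparameters, and training stack are not documented in this card, and TRL's trainer arguments vary across versions.

```python
# Illustrative DPO sketch (NOT the authors' recipe): TRL's DPOTrainer on
# Anthropic/hh-rlhf, starting from the SFT checkpoint this card builds on.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

BASE = "SurgeGlobal/OpenBezoar-HH-RLHF-SFT"

model = AutoModelForCausalLM.from_pretrained(BASE)
tokenizer = AutoTokenizer.from_pretrained(BASE)

# HH-RLHF rows store whole "chosen"/"rejected" conversations, while
# DPOTrainer expects prompt/chosen/rejected strings. Splitting on the last
# "Assistant:" turn is a simplification, not the authors' preprocessing.
def to_pairs(row):
    marker = "\n\nAssistant:"
    prompt, _, chosen = row["chosen"].rpartition(marker)
    _, _, rejected = row["rejected"].rpartition(marker)
    return {"prompt": prompt + marker, "chosen": chosen, "rejected": rejected}

dataset = load_dataset("Anthropic/hh-rlhf", split="train").map(to_pairs)

# Hyperparameters are placeholders; only the single epoch comes from the card.
# Recent TRL takes processing_class=; older releases use tokenizer= instead.
trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="openbezoar-hh-rlhf-dpo", beta=0.1, num_train_epochs=1),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```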

### Model Sources

- **Repository:** [More Information Needed]
- **Paper:** [More Information Needed]

## Instruction Format

We follow a modified version of the Alpaca prompt template, shown below. It is important to use this template to obtain the best responses for instruction-related tasks.
```
### System:
Below is an instruction that describes a task, optionally paired with an input that provides further context following that instruction. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```

Notice that **no** end-of-sequence (EOS) token is appended.

*Note: The system prompt shown above is the one the model was trained on most of the time. However, you may try any other system prompt available in the [Orca](https://arxiv.org/abs/2306.02707) scheme.*
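
As a usage illustration, the snippet below formats an instruction with this template and generates a completion via Hugging Face `transformers`. The repository id and the generation parameters are assumptions made for the sketch, not settings published in this card.

```python
# Minimal inference sketch. The repo id and generation settings are assumed
# for illustration; adjust to your environment.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "SurgeGlobal/OpenBezoar-HH-RLHF-DPO"  # assumed from the card's naming

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

def build_prompt(instruction: str) -> str:
    # Mirrors the modified Alpaca template above; note that no EOS token
    # is appended, matching the card's instructions.
    return (
        "### System:\n"
        "Below is an instruction that describes a task, optionally paired with "
        "an input that provides further context following that instruction. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

inputs = tokenizer(build_prompt("Summarize what DPO does in one sentence."),
                   return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```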

## Limitations

- The model may not consistently follow instructions and can respond inappropriately or get stuck in repetitive loops.
- Although the model is aligned to human preferences and has been evaluated for performance, it is not guaranteed to **refrain** from generating harmful content.
- Caution is urged against relying on this model for production or adjacent use cases.

## Citation

If you find our work useful, please cite our paper as follows:

```
[More Information Needed]
```

## Model Authors

Chandeepa Dissanayake, Lahiru Lowe, Sachith Gunasekara, and Yasiru Ratnayake