language:
- en
---

<!-- This is a model released from the preprint: *[Bootstrapping Language Models with DPO Implicit Rewards](https://arxiv.org/abs/2406.09760)*. Please refer to our [repository](https://github.com/sail-sg/dice) for more details. -->

# Llama-3-Base-8B-DICE-Iter2

This model was developed with [Bootstrapping Language Models with DPO Implicit Rewards](https://arxiv.org/abs/2406.09760) (DICE) at iteration 2, starting from [princeton-nlp/Llama-3-Base-8B-SFT-DPO](https://huggingface.co/princeton-nlp/Llama-3-Base-8B-SFT-DPO).
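
DICE derives its training signal from the implicit reward defined by a DPO-trained model, r(x, y) = β (log π_θ(y|x) − log π_ref(y|x)), up to a prompt-only term that cancels when comparing responses to the same prompt. Below is a minimal, unofficial sketch of computing that reward to score a response; the β value and the helper function are illustrative assumptions, not taken from the paper or repository.

```python
# Unofficial sketch: score a response with the DPO implicit reward
#   r(x, y) = beta * (log pi_theta(y|x) - log pi_ref(y|x)),
# the quantity DICE bootstraps from. beta and the helper are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

policy_name = "sail/Llama-3-Base-8B-DICE-Iter2"      # this model
ref_name = "princeton-nlp/Llama-3-Base-8B-SFT-DPO"   # its reference

tok = AutoTokenizer.from_pretrained(policy_name)
policy = AutoModelForCausalLM.from_pretrained(policy_name, torch_dtype=torch.bfloat16)
ref = AutoModelForCausalLM.from_pretrained(ref_name, torch_dtype=torch.bfloat16)

def response_logprob(model, prompt_ids, response_ids):
    """Sum of log-probabilities the model assigns to the response tokens."""
    input_ids = torch.cat([prompt_ids, response_ids], dim=-1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position t predict token t + 1, so shift by one.
    resp_logits = logits[:, prompt_ids.shape[-1] - 1 : -1, :]
    logps = torch.log_softmax(resp_logits.float(), dim=-1)
    return logps.gather(-1, response_ids.unsqueeze(-1)).squeeze(-1).sum(-1)

beta = 0.1  # illustrative; not the paper's setting
prompt_ids = tok("What is direct preference optimization?", return_tensors="pt").input_ids
response_ids = tok(" It trains a policy directly on preference pairs.",
                   add_special_tokens=False, return_tensors="pt").input_ids
reward = beta * (response_logprob(policy, prompt_ids, response_ids)
                 - response_logprob(ref, prompt_ids, response_ids))
print(reward.item())
```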

<!-- We utilized the prompt sets extracted from [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized). -->

## Links to Other Models
- [Llama-3-Base-8B-DICE-Iter1](https://huggingface.co/sail/Llama-3-Base-8B-DICE-Iter1)
- [Llama-3-Base-8B-DICE-Iter2](https://huggingface.co/sail/Llama-3-Base-8B-DICE-Iter2)

## Model Description

- Model type: An 8B parameter GPT-like model fine-tuned on synthetic datasets.
- Language(s) (NLP): Primarily English
- License: MIT
- Fine-tuned from model: princeton-nlp/Llama-3-Base-8B-SFT-DPO
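
A minimal usage sketch with the Hugging Face transformers library (assumed, not from the official card; the prompt format and generation settings are illustrative):

```python
# Load the model and sample a completion; settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sail/Llama-3-Base-8B-DICE-Iter2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Explain direct preference optimization in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```
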
## [AlpacaEval Leaderboard Evaluation Results](https://tatsu-lab.github.io/alpaca_eval/)

| Model | LC Win Rate (%) | Win Rate (%) |
|-------------------------------------------|:------------:|:--------:|
| [Llama-3-Base-8B-SFT-DPO](https://huggingface.co/princeton-nlp/Llama-3-Base-8B-SFT-DPO) | 18.20 | 15.50 |
| [Llama-3-Base-8B-DICE-Iter1](https://huggingface.co/sail/Llama-3-Base-8B-DICE-Iter1) | 25.08 | 25.77 |
| [Llama-3-Base-8B-DICE-Iter2](https://huggingface.co/sail/Llama-3-Base-8B-DICE-Iter2) | **27.55** | **30.99** |

## Citation

```bibtex
@article{chen2024bootstrapping,
  title={Bootstrapping Language Models with DPO Implicit Rewards},
  author={Chen, Changyu and Liu, Zichen and Du, Chao and Pang, Tianyu and Liu, Qian and Sinha, Arunesh and Varakantham, Pradeep and Lin, Min},
  journal={arXiv preprint arXiv:2406.09760},
  year={2024}
}
```