Update README.md
README.md CHANGED
@@ -21,7 +21,7 @@ BPO is a black-box alignment technique that differs from training-based methods
 
 ### Data
 Prompt优化模型由隐含人类偏好特征的prompt优化对训练得到,数据集的详细信息在这里。
-The Prompt Optimization Model is trained on prompt optimization pairs which contain human preference features. Detailed information on the dataset can be found [here](https://huggingface.co/datasets/
+The Prompt Optimization Model is trained on prompt optimization pairs which contain human preference features. Detailed information on the dataset can be found [here](https://huggingface.co/datasets/THUDM/BPO).
 
 ### Backbone Model
 The prompt preference optimizer is built on `Llama-2-7b-chat-hf`.
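
For quick inspection, the prompt optimization pairs can be pulled straight from the Hub with the `datasets` library. The dataset ID below comes from the link in the diff; the split name and the way records are indexed are assumptions about how the dataset card is organized, not something stated in the README. A minimal sketch:

```python
# Minimal sketch: download the BPO prompt-optimization pairs from the Hugging Face Hub.
# The dataset ID "THUDM/BPO" is taken from the README link; the "train" split and
# record layout are assumptions and may differ from the actual dataset card.
from datasets import load_dataset

dataset = load_dataset("THUDM/BPO")
print(dataset)              # inspect available splits and columns
print(dataset["train"][0])  # look at one prompt-optimization pair (assumed "train" split)
```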
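Likewise, the `Llama-2-7b-chat-hf` backbone can be loaded with `transformers`. The Hub ID `meta-llama/Llama-2-7b-chat-hf` is the standard identifier for that backbone (it requires accepting Meta's license); the generation call is purely illustrative, and the released BPO optimizer checkpoint may be published under its own ID.

```python
# Sketch of loading the Llama-2-7b-chat-hf backbone named in the README.
# "meta-llama/Llama-2-7b-chat-hf" is the usual Hub ID for this model; the prompt
# and generation settings below are illustrative assumptions, not the BPO recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Tell me about black-box prompt optimization."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```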