# Datasets for Direct Preference for Denoising Diffusion Policy Optimization (D3PO)
**Description:** This repository contains the datasets for the D3PO method introduced in the paper [Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model](https://arxiv.org/abs/2311.13231). The `d3po_dataset` directory holds the data for the image distortion experiment with the anything-v5 model.

The `text2img_dataset` directory contains the images generated by the pretrained, preferred-image fine-tuned, reward-weighted fine-tuned, and D3PO fine-tuned models in the prompt-image alignment experiment.

**Source Code:** The code used to generate this data can be found here.
## Directory

`d3po_dataset`:

- epoch1
  - all_img
    - *.png
  - deformed_img
    - *.png
  - json
    - data.json (required for training)
    - prompt.json
    - sample.pkl (required for training)
- epoch2
- ...
- epoch5
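For orientation, here is a minimal Python sketch of how one epoch's training files might be loaded. The paths follow the listing above; the assumptions that `data.json`/`prompt.json` are plain JSON, that `sample.pkl` is a standard pickle, and that `sample.pkl` sits under `json/` are inferred from the file names, not taken from the D3PO code.

```python
# Minimal loading sketch (assumed layout; adjust paths if sample.pkl sits elsewhere).
import json
import pickle
from pathlib import Path

epoch_dir = Path("d3po_dataset/epoch1")

# Files marked "required for training" in the listing above.
with open(epoch_dir / "json" / "data.json") as f:
    data = json.load(f)
with open(epoch_dir / "json" / "sample.pkl", "rb") as f:
    sample = pickle.load(f)

# Remaining files: prompts plus the generated and deformed images.
with open(epoch_dir / "json" / "prompt.json") as f:
    prompts = json.load(f)
all_imgs = sorted((epoch_dir / "all_img").glob("*.png"))
deformed_imgs = sorted((epoch_dir / "deformed_img").glob("*.png"))

print(f"{len(all_imgs)} images, {len(deformed_imgs)} deformed images")
```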
`text2img_dataset`:

- img
- data_*.json
- plot.ipynb
- prompt.txt
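Similarly, a minimal sketch (an assumption about the file formats, not code from the repo) for gathering the prompt-image alignment data:

```python
# Minimal sketch for text2img_dataset (assumes data_*.json are plain JSON,
# prompt.txt holds one prompt per line, and images under img/ are .png files).
import json
from pathlib import Path

root = Path("text2img_dataset")

records = [json.loads(p.read_text()) for p in sorted(root.glob("data_*.json"))]
prompts = (root / "prompt.txt").read_text().splitlines()
image_paths = sorted((root / "img").rglob("*.png"))  # images from the four models

print(len(records), "json files,", len(prompts), "prompts,", len(image_paths), "images")
```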
## Citation

```bibtex
@article{yang2023using,
  title={Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model},
  author={Yang, Kai and Tao, Jian and Lyu, Jiafei and Ge, Chunjiang and Chen, Jiaxin and Li, Qimai and Shen, Weihan and Zhu, Xiaolong and Li, Xiu},
  journal={arXiv preprint arXiv:2311.13231},
  year={2023}
}
```