---
license: mit
task_categories:
- text-to-image
language:
- en
configs:
- config_name: default
data_files:
- split: combinations_common
path:
- combinations_layouts_common.json
- split: combinations_uncommon
path:
- combinations_layouts_uncommon.json
- split: count_2_4
path:
- count_layouts_2_4.json
- split: count_5_7
path:
- count_layouts_5_7.json
- split: count_8_10
path:
- count_layouts_8_10.json
- split: position_boundary
path:
- position_layouts_boundary.json
- split: position_center
path:
- position_layouts_center.json
- split: size_large
path:
- size_layouts_large.json
- split: size_small
path:
- size_layouts_small.json
pretty_name: LayoutBench-COCO
---
# LayoutBench-COCO
Release of the LayoutBench-COCO dataset from [Diagnostic Benchmark and Iterative Inpainting for Layout-Guided Image Generation (CVPR 2024 Workshop)](https://layoutbench.github.io/).
See also [LayoutBench](https://huggingface.co/datasets/j-min/layoutbench) for fine-grained evaluation on OOD layouts with CLEVR objects.
[[Project Page](https://layoutbench.github.io/)]
[[Paper](https://arxiv.org/abs/2304.06671)]
Authors:
[Jaemin Cho](https://j-min.io),
[Linjie Li](https://www.microsoft.com/en-us/research/people/linjli/),
[Zhengyuan Yang](https://zyang-ur.github.io/),
[Zhe Gan](https://zhegan27.github.io/),
[Lijuan Wang](https://www.microsoft.com/en-us/research/people/lijuanw/),
[Mohit Bansal](https://www.cs.unc.edu/~mbansal/)
## Example layout inputs for the 4 skills
## Summary
LayoutBench-COCO is a diagnostic benchmark that examines layout-guided image generation models on arbitrary, unseen layouts.
Unlike [LayoutBench](https://huggingface.co/datasets/j-min/layoutbench), LayoutBench-COCO consists of OOD layouts of real objects and supports zero-shot evaluation.
LayoutBench-COCO measures 4 skills (Number, Position, Size, Combination), using objects from [MS COCO](https://cocodataset.org/).
The new 'Combination' skill consists of layouts with two objects in different spatial relations; the remaining three skills are similar to those of LayoutBench.
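The splits declared in the YAML header above can be loaded with the 🤗 `datasets` library. Below is a minimal loading sketch; the repository ID `j-min/layoutbench-coco` is an assumption based on where the companion LayoutBench dataset is hosted, and if the JSON files are not flat lists of records you may prefer downloading them directly with `huggingface_hub`.
```python
# Minimal loading sketch; "j-min/layoutbench-coco" is an assumed repository ID.
from datasets import load_dataset
from huggingface_hub import hf_hub_download

# Load one split declared in the YAML header, e.g. the 2-4 object counting layouts.
count_few = load_dataset("j-min/layoutbench-coco", split="count_2_4")
print(count_few)

# Alternatively, fetch a raw layout file and parse it yourself.
path = hf_hub_download(
    repo_id="j-min/layoutbench-coco",
    filename="count_layouts_2_4.json",
    repo_type="dataset",
)
print(path)
```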
## Skill Details
### Skill 1: Number.
We define two layouts for each of 2∼10 objects and use 40 COCO object categories, resulting in 720 total layouts (= 2×9×40). We refer to the layouts with 2∼4 and 8∼10 objects as the few and many splits (released here as `count_2_4`, `count_5_7`, and `count_8_10`). The layouts are paired with captions following the template `“a photo of [N] [objects]”`.
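As a quick illustration of the counting captions, the snippet below fills the `“a photo of [N] [objects]”` template; the naive pluralization and the rendering of `[N]` are assumptions for illustration, not the benchmark's exact caption-generation code.
```python
# Illustrative caption filling for the Number skill; not the benchmark's exact code.
def count_caption(category: str, count: int) -> str:
    # Naive pluralization; COCO categories like "sheep" or "bus" would need special casing.
    plural = category if category.endswith("s") else category + "s"
    return f"a photo of {count} {plural}"

print(count_caption("dog", 3))    # a photo of 3 dogs
print(count_caption("zebra", 8))  # a photo of 8 zebras
```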
### Skill 2: Position.
For each of the boundary and center splits, we define four layouts with 40 COCO object categories, resulting in 320 total layouts (= 2×4×40). The layouts are paired with captions following the template `“a photo of [N] [objects]”`.
### Skill 3: Size.
For each of the small and large splits, we define nine layouts with 40 COCO object categories, resulting in 720 total layouts (= 2×9×40). The layouts are paired with captions following the template `“a photo of [N] [objects]”`.
### Skill 4: Combination.
This skill measures whether a model can generate two objects that appear together either commonly or uncommonly in the real world. For each of the three spatial relations (holding, next to, sitting on), we define three layouts without specifying objects. For each relation, we manually define 20 COCO object pairs for the common and uncommon splits. For example, ‘person sitting on chair’ is more common than ‘elephant sitting on banana’ in real life. This results in 360 total layouts (= 2×3×3×20). The layouts are paired with captions following the template `“[objA] [relation] [objB]”`.
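The arithmetic and the caption template for this skill are easy to sanity-check in code. The sketch below is illustrative; the example object pairs are taken from the sentence above, and the full 20 curated pairs per relation are only available in the released JSON files.
```python
# Sanity check of the Combination skill size: 2 splits x 3 relations x 3 layouts x 20 pairs = 360.
relations = ["holding", "next to", "sitting on"]
num_layouts_per_relation = 3
num_pairs_per_relation = 20   # manually curated COCO object pairs (see the released JSON)
num_splits = 2                # common and uncommon

total_layouts = num_splits * len(relations) * num_layouts_per_relation * num_pairs_per_relation
assert total_layouts == 360

def combination_caption(obj_a: str, relation: str, obj_b: str) -> str:
    """Fill the "[objA] [relation] [objB]" caption template."""
    return f"{obj_a} {relation} {obj_b}"

print(combination_caption("person", "sitting on", "chair"))     # common split example
print(combination_caption("elephant", "sitting on", "banana"))  # uncommon split example
```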
## Use of LayoutBench-COCO
### Step 1: Download LayoutBench-COCO and Generate Images from Layouts
Please see [Step 1 README](https://github.com/j-min/LayoutBench-COCO/blob/main/image_generation/README.md) for instructions on downloading LayoutBench-COCO and generating images from layouts.
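As a loose illustration of this step, the sketch below feeds one layout to a layout-guided model through the `diffusers` GLIGEN pipeline. The layout field names (`caption`, `phrases`, `boxes`) are assumptions about the JSON schema, and the checkpoint is just one publicly available GLIGEN model, not necessarily the setup used in the paper; follow the Step 1 README for the official generation instructions.
```python
# Loose illustration only; see the Step 1 README for the official generation setup.
# Assumes each layout provides a caption, per-object phrases, and normalized xyxy boxes.
import torch
from diffusers import StableDiffusionGLIGENPipeline

pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box", torch_dtype=torch.float16
).to("cuda")

layout = {  # hypothetical example record
    "caption": "a photo of 2 dogs",
    "phrases": ["dog", "dog"],
    "boxes": [[0.10, 0.40, 0.45, 0.90], [0.55, 0.40, 0.90, 0.90]],
}

image = pipe(
    prompt=layout["caption"],
    gligen_phrases=layout["phrases"],
    gligen_boxes=layout["boxes"],
    gligen_scheduled_sampling_beta=1.0,
    num_inference_steps=50,
).images[0]
image.save("generated.png")
```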
### Step 2: Run Evaluation
Please see [Step 2 README](https://github.com/j-min/LayoutBench-COCO/blob/main/yolov7/README.md) for instructions on running evaluation on LayoutBench-COCO.
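As a rough intuition for what the evaluation measures, the sketch below greedily matches ground-truth layout boxes to detected boxes by IoU. This is a simplification for illustration only; the official protocol (detector weights, thresholds, and the exact metric) is defined in the Step 2 README and may differ.
```python
# Illustrative only; the official metric and detector setup are in the Step 2 README.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def matched_fraction(gt: List[Box], det: List[Box], thr: float = 0.5) -> float:
    """Fraction of layout boxes covered by some detection with IoU >= thr (greedy matching)."""
    used, hits = set(), 0
    for g in gt:
        best_j, best_iou = -1, 0.0
        for j, d in enumerate(det):
            if j not in used and iou(g, d) > best_iou:
                best_j, best_iou = j, iou(g, d)
        if best_iou >= thr:
            used.add(best_j)
            hits += 1
    return hits / max(len(gt), 1)
```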
## Citation
```bibtex
@inproceedings{Cho2024LayoutBench,
  author    = {Jaemin Cho and Linjie Li and Zhengyuan Yang and Zhe Gan and Lijuan Wang and Mohit Bansal},
  title     = {Diagnostic Benchmark and Iterative Inpainting for Layout-Guided Image Generation},
  booktitle = {The First Workshop on the Evaluation of Generative Foundation Models},
  year      = {2024},
}
```