---
language:
- en
license:
- cc-by-nc-4.0
tags:
- object-centric learning
size_categories:
- 10K
---
Figure 2: Examples of images, depth maps, and segmentation maps of the dataset.
### Supported Tasks and Leaderboards

- `object-centric learning`: The dataset can be used to train a model for [object-centric learning](https://arxiv.org/abs/2202.07135), which aims to learn compositional scene representations in an unsupervised manner. The segmentation performance of a model is measured by Adjusted Mutual Information (AMI), Adjusted Rand Index (ARI), and mean Intersection over Union (mIoU). Two variants of AMI and ARI are used to evaluate segmentation performance more thoroughly: AMI-A and ARI-A are computed using all pixels in the image and measure how accurately the different layers of visual concepts (both objects and the background) are separated, while AMI-O and ARI-O are computed using only the pixels in object regions and focus on how accurately the different objects are separated from one another. The reconstruction performance of a model is measured by Mean Squared Error (MSE) and Learned Perceptual Image Patch Similarity (LPIPS). Success on this task is typically measured by achieving high AMI, ARI, and mIoU and low MSE and LPIPS.

### Languages

English.

## Dataset Structure

We provide images at three resolutions for each scene: 640x480, 256x256, and 128x128. The name of each image has the form `[scene_id]_[frame_id].png`. The images are available in `./640x480`, `./256x256`, and `./128x128`, respectively. They are compressed using `tar` and split into multiple files whose names start with the resolution, e.g. `image_128x128_`. Please download all compressed files and use the `tar` command to decompress them.
For example, for the 128x128 resolution images, download all the scene files starting with `image_128x128_*` and merge them into `image_128x128.tar.gz`:

```
cat image_128x128_* > image_128x128.tar.gz
```

Then decompress the file:

```
tar xvzf image_128x128.tar.gz
```

### Data Instances

Each data instance contains an RGB image, its depth map, its camera intrinsic matrix, its camera pose, and its segmentation map. The segmentation map is `None` in the training and validation sets.

### Data Fields

- `scene_id`: a string scene identifier for each example
- `frame_id`: a string frame identifier for each example
- `resolution`: a string for the image resolution of each example (e.g. 640x480, 256x256, 128x128)
- `image`: a `PIL.Image.Image` object containing the image
- `depth`: a `PIL.Image.Image` object containing the depth map
- `segment`: a `PIL.Image.Image` object containing the segmentation map, where the integer at each pixel is the index of the object (ranging from 1 to 10, with 0 representing the background)
- `intrinsic_matrix`: a `numpy.ndarray` for the camera intrinsic matrix of each image
- `camera_pose`: a `numpy.ndarray` for the camera pose of each image

### Data Splits

The data is split into two subsets to create datasets with different levels of difficulty. Both subsets are randomly divided into training, validation, and testing sets. The validation and testing sets each consist of 100 scenes, while the remaining scenes form the training set. Only the data in the testing set contain segmentation annotations for evaluation.

OCTScenes-A contains 3200 scenes (`scene_id` from 0000 to 3199) and includes only the first 11 object types, with scenes consisting of 1 to 6 objects, making it comparatively smaller and less complex.
Images with `scene_id` ranging from 0000 to 2999 are used for training, images with `scene_id` from 3000 to 3099 for validation, and images with `scene_id` from 3100 to 3199 for testing.

OCTScenes-B contains 5000 scenes (`scene_id` from 0000 to 4999) and includes all 15 object types, with scenes consisting of 1 to 10 objects, resulting in a larger and more complex dataset. Images with `scene_id` from 0000 to 4799 are used for training, images with `scene_id` from 4800 to 4899 for validation, and images with `scene_id` from 4900 to 4999 for testing.

| Dataset | OCTScenes-A | | | OCTScenes-B | | |
|---|---|---|---|---|---|---|
| Resolution | 640x480 | 256x256 | 128x128 | 640x480 | 256x256 | 128x128 |
| Split | train | validation | test | train | validation | test |
| Number of scenes | 3000 | 100 | 100 | 4800 | 100 | 100 |
| Number of object categories | 11 | | | 15 | | |
| Number of objects in a scene | 1~6 | | | 1~10 | | |
| Number of views in a scene | 60 | | | 60 | | |
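The -A and -O metric variants described under Supported Tasks can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' official evaluation code: it assumes scikit-learn is available, that segmentation maps are integer arrays following the `segment` field convention above (0 for background, 1 to 10 for objects), and that the ground-truth map defines which pixels count as object regions for the -O variants.

```python
import numpy as np
from sklearn.metrics import adjusted_mutual_info_score, adjusted_rand_score

def segmentation_scores(gt, pred):
    """Sketch of the AMI/ARI -A and -O variants for a single image.

    gt, pred: integer arrays of the same shape, where 0 marks the
    background and positive values index objects (the convention of
    the `segment` field described above).
    """
    gt_flat, pred_flat = gt.ravel(), pred.ravel()
    # -A variants: computed over all pixels, so the background
    # counts as one more layer to be separated.
    scores = {
        "AMI-A": adjusted_mutual_info_score(gt_flat, pred_flat),
        "ARI-A": adjusted_rand_score(gt_flat, pred_flat),
    }
    # -O variants: restricted to ground-truth object pixels, so only
    # the separation between objects is measured.
    obj = gt_flat > 0
    scores["AMI-O"] = adjusted_mutual_info_score(gt_flat[obj], pred_flat[obj])
    scores["ARI-O"] = adjusted_rand_score(gt_flat[obj], pred_flat[obj])
    return scores

# Toy example: a 4x4 scene with one object on the background.
gt = np.zeros((4, 4), dtype=np.int64)
gt[1:3, 1:3] = 1
scores = segmentation_scores(gt, gt)  # score a perfect prediction
```

Both AMI and ARI are invariant to label permutations, so a prediction only needs to group pixels consistently, not reproduce the exact object indices.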