---
license: apache-2.0
dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype:
      class_label:
        names:
          '0': croissant
          '1': kitchen-counter
          '2': serving-cup
          '3': whiteboard
  splits:
  - name: train
    num_bytes: 3760395
    num_examples: 23
  download_size: 3762376
  dataset_size: 3760395
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
tags:
- robotics
- computer vision
pretty_name: Reachy Doing Things Image Dataset
size_categories:
- n<1K
---
# Reachy Doing Things Dataset

## Overview
The Reachy Doing Things Image Dataset consists of images captured from the perspective of the Reachy humanoid robot. These images were taken during teleoperation sessions and provide a unique view of the environment as perceived by the robot during manipulation tasks. They were captured with an RGB-D camera mounted on the robot's shoulder.
## Purpose
This dataset is primarily aimed at testing and validating the performance of vision tools integrated into the pollen-vision library. Currently, it serves as a validation suite for object detection, object segmentation, and image tagging algorithms.
While it is not intended for training models at this stage, it provides valuable real-world data to refine and improve these vision tools. The images in the dataset are not annotated.
## Content
- Non-annotated, ego-centric images captured during teleoperation sessions with Reachy
## Future Updates
While the dataset is currently limited in size, ongoing efforts are underway to expand its content. Future updates will include additional images and annotations, making it a more comprehensive resource for vision research and development.
## Contact
For any inquiries or feedback regarding the dataset, please contact contact@pollen-robotics.com.