
Buster Dual-Arm Robot Dataset

This dataset contains demonstration data from a Buster dual-arm robot system recorded in Isaac Sim.

Dataset Details

  • Robot Type: Buster Dual-Arm (2x UR arms + 2x Robotiq 3F grippers)
  • Total Episodes: 1
  • Total Frames: 352
  • FPS: 8.62
  • Video Resolution: 1280x720x3 (RGB)
  • State Dimensions: 34 joints
  • Action Dimensions: 34 joints

Robot Configuration

Arms

  • Arm 1: UR arm (6 DOF)
  • Arm 2: UR arm (6 DOF)

Grippers

  • Gripper 1: Robotiq 3F gripper (11 joints)
  • Gripper 2: Robotiq 3F gripper (11 joints)

Sensors

  • Base Camera: RGB camera providing visual observations
  • Joint States: Position feedback from all 34 joints (one possible joint layout is sketched after this list)
  • Joint Commands: Action commands sent to the robot
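
The 34 state and action dimensions come from 2 × 6 arm joints plus 2 × 11 gripper joints. Below is a minimal sketch of splitting a state vector, assuming the joints are ordered arm 1, gripper 1, arm 2, gripper 2; this ordering is not documented here and should be checked against the dataset's joint names.

import numpy as np

ARM_DOF = 6       # each UR arm: 6 joints
GRIPPER_DOF = 11  # each Robotiq 3F gripper: 11 joints

def split_state(state: np.ndarray) -> dict:
    """Split a 34-D state vector into assumed arm/gripper segments (2*6 + 2*11 = 34)."""
    assert state.shape == (2 * ARM_DOF + 2 * GRIPPER_DOF,)
    return {
        "arm_1": state[0:6],       # assumed ordering, not confirmed by the dataset
        "gripper_1": state[6:17],
        "arm_2": state[17:23],
        "gripper_2": state[23:34],
    }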

Data Format

This dataset follows the LeRobot v2.1 format (a minimal shape check is sketched after this list):

  • State: Joint positions (34D vector)
  • Action: Joint commands (34D vector)
  • Video: RGB observations from base camera
  • Metadata: Task descriptions, episode info, statistics
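
A minimal shape check, assuming the per-frame key names follow the LeRobot convention used in the Usage section below (the camera key name and exact tensor layout are assumptions; inspect frame.keys() to confirm):

from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("noskiper-chwy/buster-dual-arm-demo")
frame = dataset[0]

print(sorted(frame.keys()))              # all per-frame fields
print(frame["observation.state"].shape)  # expected: torch.Size([34])
print(frame["action"].shape)             # expected: torch.Size([34])
# The camera stream (assumed key: "observation.images.webcam") holds the 1280x720 RGB frames.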

Usage

from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Load the dataset
dataset = LeRobotDataset("noskiper-chwy/buster-dual-arm-demo")

# Access data
first_frame = dataset[0]
state = first_frame["observation.state"]
action = first_frame["action"]
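
Because LeRobotDataset behaves like a standard PyTorch dataset, it can also be wrapped in a DataLoader for batched access. A minimal sketch (batch size and worker count are arbitrary choices):

import torch
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("noskiper-chwy/buster-dual-arm-demo")

# Batched, shuffled iteration over the 352 frames; each batch is a dict of stacked tensors.
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True, num_workers=2)

batch = next(iter(loader))
print(batch["observation.state"].shape)  # torch.Size([32, 34])
print(batch["action"].shape)             # torch.Size([32, 34])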

Recording Environment

  • Simulator: Isaac Sim
  • Topics Recorded (a hypothetical recording sketch follows this list):
    • /joint_states - Robot state
    • /isaac_joint_commands - Robot commands
    • /base_camera_sensor/image_raw - Visual observations
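
The actual conversion pipeline from these ROS 2 topics to the dataset is not included here. The snippet below is only a hypothetical sketch of collecting /joint_states messages into 34-D state vectors with rclpy; the node name and buffering logic are illustrative.

import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState

class StateRecorder(Node):
    """Hypothetical recorder that buffers /joint_states messages as 34-D vectors."""

    def __init__(self):
        super().__init__("state_recorder")
        self.states = []
        self.create_subscription(JointState, "/joint_states", self.on_joint_state, 10)

    def on_joint_state(self, msg: JointState):
        # msg.position carries one value per joint; 34 values are expected here.
        self.states.append(np.asarray(msg.position, dtype=np.float32))

def main():
    rclpy.init()
    node = StateRecorder()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()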

Limitations

  • Single demonstration episode only (1 episode, 352 frames)
  • No task descriptions (the placeholder "Unknown Task" is used)
  • The modality configuration treats all 34 joints as a single arm; it should be split per arm and gripper for dual-arm use (a hypothetical split is sketched below)
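
For reference, a split layout could group the joints as follows. This is purely a hypothetical sketch using the same assumed joint ordering as the earlier sketch; the actual modality configuration format used by your tooling may differ.

# Hypothetical per-arm/per-gripper grouping of the 34 joints, using the same assumed
# ordering as the earlier sketch; index ranges are half-open [start, end).
DUAL_ARM_MODALITIES = {
    "arm_1": {"start": 0, "end": 6},
    "gripper_1": {"start": 6, "end": 17},
    "arm_2": {"start": 17, "end": 23},
    "gripper_2": {"start": 23, "end": 34},
}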

Citation

If you use this dataset, please cite:

@dataset{buster_dual_arm_demo,
  title={Buster Dual-Arm Robot Demonstration Dataset},
  author={noskiper},
  year={2025},
  url={https://huggingface.co/datasets/noskiper-chwy/buster-dual-arm-demo}
}