---
license: apache-2.0
---
# Dataset Card for ByteDance Robot Benchmark with 20 Tasks (BDRBench-20)

## Table of Contents
- [Dataset Card for ByteDance Robot Benchmark with 20 Tasks (BDRBench-20)](#dataset-card-for-bytedance-robot-benchmark-with-20-tasks-bdrbench-20)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Dataset Structure](#dataset-structure)
      - [Annotation Structure](#annotation-structure)
      - [Media Structure](#media-structure)
      - [Data Splits](#data-splits)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Additional Information](#additional-information)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [RoboVLMs](https://robovlms.github.io), [GR-2](https://gr2-manipulation.github.io/)
- **Repository:** [RoboVLMs](https://github.com/Robot-VLAs/RoboVLMs)
- **Contact:** kongtao@bytedance.com

### Dataset Summary

ByteDance Robot Benchmark (BDRBench-20) is a vision-language-action (VLA) dataset containing 8K high-quality trajectories. It covers 20 common manipulation tasks, such as pick-and-place, pouring, and opening/closing, and is intended for training and evaluating VLA models in real-world scenarios.

### Dataset Structure

The dataset has two top-level directories, `anns` (annotations) and `media` (videos), each split into `train` and `val` sets. The `anns` directory contains one annotation file per subtask, while the `media` directory contains the rollout videos for each task.

For example, to collect a trajectory for the task "*pick up the cucumber from the cutting board; place the picked object in the vegetable basket*", the robot is teleoperated to perform the pick and place subtasks consecutively to improve collection efficiency. The rollout of both subtasks is recorded in the same video, but their annotations are stored in separate files.

The detailed file structure is as follows:
```bash
Dataset
├── anns          # text, video path, actions
│   ├── train
│   │   ├── {id}.json
│   │   ├── ...
│   ├── val
│   │   ├── {id}.json
│   │   ├── ...
├── media         # videos
│   ├── train
│   │   ├── {id}
│   │   │   ├── rgb.mp4
│   │   │   ├── hand_rgb.mp4
│   │   ├── ...
```
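To make the layout concrete, here is a minimal sketch of loading one annotation file and resolving its video paths. The dataset root is a placeholder, the annotation id `0_5` is borrowed from the example annotation shown below, and the assumption that the `/media/...` paths resolve under the dataset root is ours, not something stated by the dataset.

```python
import json
from pathlib import Path

# Placeholder locations; the id "0_5" is borrowed from the example annotation below.
DATASET_ROOT = Path("/path/to/BDRBench-20")
ann_file = DATASET_ROOT / "anns" / "val" / "0_5.json"

with open(ann_file) as f:
    label = json.load(f)

print("task:", label["texts"])             # e.g. ["open the drawer"]
print("timesteps:", len(label["action"]))  # one 7-D action per timestep

# Assumption: the "/media/..." paths in the annotation resolve under the dataset root.
for video in label["videos"]:
    video_path = DATASET_ROOT / video["video_path"].lstrip("/")
    print(video_path, "frames", video["start"], "-", video["end"])
```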
#### Annotation Structure

Here we explain the meaning of each key in the annotation JSON files (under `./anns`).

1) **"texts"**: A list containing a single string that describes the task in English.
Example: `["open the drawer"]`

2) **"videos"**: A list containing two dictionaries. The first corresponds to the video recorded by the static camera, and the second to the wrist camera. Each dictionary uses the following keys:
- `video_path`: The path to the video file.
- `start`: The starting frame of the task in the video.
- `end`: The ending frame of the task in the video.
- The first dictionary also contains an additional key, `crop`, which specifies the cropping area for the video. It is recommended to use this key to crop frames during training to reduce the impact of irrelevant background (a hedged cropping sketch follows the example below).

Example:
```python
[
    {
        "video_path": "/media/val/0_5/rgb.mp4",
        "crop": [[45, 200], [705, 1000]],
        "start": 0,
        "end": 124
    },
    {
        "video_path": "/media/val/0_5/hand_rgb.mp4",
        "start": 0,
        "end": 124
    }
]
```
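The annotation above does not spell out the coordinate convention of `crop`. As a hedged illustration, the sketch below assumes `crop = [[top, left], [bottom, right]]` in pixel coordinates of the static-camera frame; the video path is a placeholder, and the convention should be verified against the actual frames.

```python
import cv2  # opencv-python

# Placeholder path to a static-camera video from the media directory.
cap = cv2.VideoCapture("/path/to/BDRBench-20/media/val/0_5/rgb.mp4")
ok, frame = cap.read()  # frame is an H x W x 3 BGR array
cap.release()
assert ok, "could not read the first frame"

# Assumption: crop = [[top, left], [bottom, right]] in pixels.
crop = [[45, 200], [705, 1000]]
(top, left), (bottom, right) = crop
cropped = frame[top:bottom, left:right]  # numpy slicing: rows first, then columns
print(frame.shape, "->", cropped.shape)
```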

3) **"action"**: A list recording the action at every timestep, expressed in 7 dimensions: 3 for translation (x, y, z), 3 for rotation (Euler angles), and 1 for the gripper (open/close). Note that the action encodes a relative change of state: the state s<sub>t+1</sub> is expressed in the coordinate system of the end effector at timestep t. When using these data, you should therefore also work with relative states (a sketch of applying such a relative action back onto an absolute pose follows the example code below).

4) **"state"**: Similar to "action", the state is described in 7 dimensions (3 for translation, 3 for Euler angles, and 1 for gripper open/close), but it is expressed in a global coordinate system. Since the data are collected from different machines with different global coordinate frames, it is recommended to convert to relative states if you want to train on the state data and deploy the model in another environment.

Example code for calculating relative states:
```python
import numpy as np
import torch

# `euler2rotm` / `rotm2euler` are helpers that convert between Euler angles
# and 3x3 rotation matrices (e.g. via scipy.spatial.transform.Rotation).

# Example of how to get relative states
def _get_relative_states(self, label, frame_ids):
    # Assume you have loaded the annotation file into 'label';
    # 'frame_ids' indicates the indexes of the states you want to use.
    states = label['state']
    first_id = frame_ids[0]
    first_xyz = np.array(states[first_id][0:3])
    first_rpy = np.array(states[first_id][3:6])
    first_rotm = euler2rotm(first_rpy)
    first_gripper = states[first_id][6]
    first_state = np.zeros(7, dtype=np.float32)
    first_state[-1] = first_gripper
    rel_states = [first_state]
    for k in range(1, len(frame_ids)):
        curr_frame_id = frame_ids[k]
        curr_xyz = np.array(states[curr_frame_id][0:3])
        curr_rpy = np.array(states[curr_frame_id][3:6])
        curr_rotm = euler2rotm(curr_rpy)
        # Pose of the current end effector expressed in the first frame's end-effector frame
        curr_rel_rotm = first_rotm.T @ curr_rotm
        curr_rel_rpy = rotm2euler(curr_rel_rotm)
        curr_rel_xyz = np.dot(first_rotm.T, curr_xyz - first_xyz)
        curr_gripper = states[curr_frame_id][6]
        curr_state = np.zeros(7, dtype=np.float32)
        curr_state[0:3] = curr_rel_xyz
        curr_state[3:6] = curr_rel_rpy
        curr_state[-1] = curr_gripper
        rel_states.append(curr_state)
    return torch.from_numpy(np.array(rel_states))
```
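At deployment time the inverse operation is needed: composing a predicted relative action onto the current absolute end-effector pose. The sketch below mirrors the conventions of the example above (delta rotation right-multiplied, delta translation expressed in the current end-effector frame). The `euler2rotm`/`rotm2euler` stand-ins use an xyz Euler convention via SciPy, which is an assumption you should match to your own setup; this is an illustrative sketch, not code shipped with the dataset.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Minimal stand-ins for the euler2rotm / rotm2euler helpers used above,
# assuming an xyz Euler-angle convention (adjust to your robot's convention).
def euler2rotm(rpy):
    return R.from_euler("xyz", rpy).as_matrix()

def rotm2euler(rotm):
    return R.from_matrix(rotm).as_euler("xyz")

def apply_relative_action(abs_state, rel_action):
    """Compose a 7-D relative action onto a 7-D absolute end-effector state."""
    abs_xyz, abs_rpy = np.array(abs_state[0:3]), np.array(abs_state[3:6])
    rel_xyz, rel_rpy = np.array(rel_action[0:3]), np.array(rel_action[3:6])
    abs_rotm = euler2rotm(abs_rpy)
    # Translation delta is expressed in the current end-effector frame.
    new_xyz = abs_xyz + abs_rotm @ rel_xyz
    # Rotation delta is right-multiplied, matching rel = R_t^T @ R_{t+1} above.
    new_rotm = abs_rotm @ euler2rotm(rel_rpy)
    new_state = np.zeros(7, dtype=np.float32)
    new_state[0:3] = new_xyz
    new_state[3:6] = rotm2euler(new_rotm)
    new_state[-1] = rel_action[6]  # gripper command is taken directly
    return new_state
```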

#### Media Structure

The `media` directory stores the videos recorded by the static camera (`rgb.mp4`) and the wrist camera (`hand_rgb.mp4`). These videos are aligned frame by frame.
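Because the two streams are frame-aligned, they can be decoded in lockstep over the annotated frame window. Below is a minimal OpenCV sketch; the paths are placeholders, and treating `end` as exclusive is our assumption (adjust if it is inclusive).

```python
import cv2

def iter_aligned_frames(rgb_path, hand_path, start, end):
    """Yield (static_frame, wrist_frame) pairs over the frame window [start, end)."""
    caps = [cv2.VideoCapture(rgb_path), cv2.VideoCapture(hand_path)]
    for cap in caps:
        cap.set(cv2.CAP_PROP_POS_FRAMES, start)  # seek both streams to the same frame
    for _ in range(start, end):
        frames = []
        for cap in caps:
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(frame)
        if len(frames) != 2:
            break
        yield frames[0], frames[1]
    for cap in caps:
        cap.release()

# Placeholder paths; start/end come from the corresponding annotation file.
for rgb, hand in iter_aligned_frames(
        "/path/to/BDRBench-20/media/val/0_5/rgb.mp4",
        "/path/to/BDRBench-20/media/val/0_5/hand_rgb.mp4",
        start=0, end=124):
    pass  # e.g. feed (rgb, hand) into your preprocessing pipeline
```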

#### Data Splits

The data fields are consistent across the `train` and `val` splits. Their sizes are:

| Name  | Episodes | Samples   |
|-------|----------|-----------|
| train | 7,440    | 1,170,490 |
| val   | 638      | 97,985    |

Additionally, here is the number of trajectories for each task.

For the `train` split:
```python
{
    "pick up the cucumber from the cutting board; place the picked object in the vegetable basket": 498,
    "pick up the eggplant from the red plate; place the picked object on the table": 342,
    "pick up the mandarin from the green plate; place the picked object on the table": 297,
    "pick up the red mug from the rack; place the picked object on the table": 497,
    "pick up the knife from the left of the white plate; place the picked object into the drawer": 261,
    "pick up the black seasoning powder from the table; pour the black seasoning powder in the red bowl; place the picked object on the table": 385,
    "pick up the eggplant from the green plate; place the picked object on the table": 248,
    "pick up the potato from the vegetable basket; place the picked object on the cutting board": 496,
    "pick up the green mug from the rack; place the picked object on the table": 496,
    "pick up the potato from the cutting board; place the picked object in the vegetable basket": 500,
    "pick up the mandarin from the green plate; place the picked object on the red plate": 66,
    "pick up the cucumber from the vegetable basket; place the picked object on the cutting board": 498,
    "pick up the knife from the right of the white plate; place the picked object into the drawer": 246,
    "pick up the green bottle from the white box; place the picked object on the tray": 500,
    "pick up the eggplant from the green plate; place the picked object on the red plate": 60,
    "pick up the eggplant from the red plate; place the picked object on the green plate": 53,
    "press the toaster switch": 499,
    "open the oven": 500,
    "close the oven": 498,
    "open the drawer": 500
}
```

For the `val` split:
```python
{
    "pick up the green bottle from the white box;place the picked object on the tray": 94,
    "pick up the red mug from the rack;place the picked object on the table": 30,
    "pick up the mandarin from the green plate;place the picked object on the table": 28,
    "pick up the black seasoning powder from the table;pour the black seasoning powder in the red bowl;place the picked object on the table": 31,
    "pick up the cucumber from the cutting board;place the picked object in the vegetable basket": 41,
    "pick up the cucumber from the vegetable basket;place the picked object on the cutting board": 38,
    "pick up the potato from the cutting board;place the picked object in the vegetable basket": 41,
    "pick up the eggplant from the green plate;place the picked object on the red plate": 5,
    "pick up the eggplant from the red plate;place the picked object on the table": 26,
    "pick up the potato from the vegetable basket;place the picked object on the cutting board": 40,
    "pick up the green mug from the rack;place the picked object on the table": 29,
    "pick up the knife from the left of the white plate;place the picked object into the drawer": 10,
    "pick up the eggplant from the green plate;place the picked object on the table": 20,
    "pick up the knife from the right of the white plate;place the picked object into the drawer": 11,
    "pick up the eggplant from the red plate;place the picked object on the green plate": 2,
    "pick up the mandarin from the green plate;place the picked object on the red plate": 4,
    "open the drawer": 60,
    "press the toaster switch": 16,
    "close the oven": 55,
    "open the oven": 57
}
```

### Personal and Sensitive Information

We did not find any personal or sensitive information in this benchmark.

## Additional Information

### Licensing Information

BDRBench-20 is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Citation Information

```
@article{li2023generalist,
  title={Towards Generalist Robot Policies: What Matters in Building Vision-Language-Action Models},
  author={Li, Xinghang and Li, Peiyan and Liu, Minghuan and Wang, Dong and Liu, Jirong and Kang, Bingyi and Ma, Xiao and Kong, Tao and Zhang, Hanbo and Liu, Huaping},
  journal={arXiv preprint arXiv:2412.xxxxx},
  year={2024}
}
```

### Contributions

This dataset is a joint effort by the members of the robotics research team at ByteDance Research.