Update README.md

---
tags:
- Autonomous Driving
- Computer Vision
---

# Open MARS Dataset

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/66651bd4e4be2069a695e5a1/ooi8v0KOUhWYDbqbfLkVG.jpeg)

<br/>

## Welcome to the tutorial of the Open MARS Dataset!

Our paper has been accepted at CVPR 2024 🎉🎉🎉

Check out our [project website](https://ai4ce.github.io/MARS/) for demo videos.
Code to reproduce the videos is available in the `/visualize` folder of the `main` branch.

<br/>

## Intro

### The MARS dataset is collected with a fleet of autonomous vehicles from [MayMobility](https://maymobility.com/).

Our dataset uses the same structure as the [NuScenes](https://www.nuscenes.org/nuscenes) Dataset:

- Multitraversal: each location is saved as one NuScenes object, and each traversal is one scene.
- Multiagent: the whole set is a NuScenes object, and each multiagent encounter is one scene.

<br/>

## Download

Both Multiagent and Multitraversal subsets are now available for [download on huggingface](https://huggingface.co/datasets/ai4ce/MARS).

<br/>

## Overview

This tutorial explains how the NuScenes structure works in our dataset, including how to access a scene and query its samples of sensor data.

- [Devkit Initialization](#initialization)
- [Multitraversal](#load-multitraversal)
- [Multiagent](#load-multiagent)
- [Scene](#scene)
- [Sample](#sample)
- [Sample Data](#sample-data)
- [Camera](#camera-data)
- [LiDAR](#lidar-data)
- [IMU](#imu-data)
- [Ego & Sensor Pose](#vehicle-and-sensor-pose)
- [LiDAR-Image projection](#lidar-image-projection)

<br/>

## Initialization
First, install `nuscenes-devkit` following the NuScenes repo tutorial, [Devkit setup section](https://github.com/nutonomy/nuscenes-devkit?tab=readme-ov-file#devkit-setup). The easiest way is to install via pip:
```
pip install nuscenes-devkit
```

Import the NuScenes devkit:
```
from nuscenes.nuscenes import NuScenes
```

#### Load Multitraversal
Loading the data of location 10:
```
# The "version" variable is the name of the folder holding all .json metadata tables.
location = 10
nusc = NuScenes(version='v1.0', dataroot=f'/MARS_multitraversal/{location}', verbose=True)
```

#### Load Multiagent
Loading data for the full set:
```
nusc = NuScenes(version='v1.0', dataroot='/MARS_multiagent', verbose=True)
```

<br/>

## Scene
To see all scenes in one set (one location of the Multitraversal set, or the whole Multiagent set):
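
A minimal sketch using the devkit's built-in helpers, assuming the `nusc` object created in [Initialization](#initialization) (the exact snippet used in this tutorial may differ):
```
# Print a one-line summary of every scene in the loaded set.
nusc.list_scenes()

# Or iterate over the scene table directly.
for scene in nusc.scene:
    print(scene['name'], scene['token'])
```
Each scene record also carries dataset-specific fields, including: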
- `intersection`: location index.
- `err_max`: maximum time difference (in milliseconds) between camera images of the same frame in this scene.

<br/>

## Sample
Get the first sample (frame) of one scene:
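
A minimal sketch, assuming the standard NuScenes schema, where every scene record exposes a `first_sample_token`:
```
# Take one scene and fetch its first sample (frame) record.
my_scene = nusc.scene[0]
my_sample = nusc.get('sample', my_scene['first_sample_token'])
print(my_sample)
```
Fields of the returned sample record include: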
- `data`: dict of data tokens for this sample's sensor data.
- `anns`: empty, as we do not have annotation data at the moment.

<br/>

## Sample Data
Our sensor names are different from NuScenes' sensor names. It is important that you use the correct name when querying sensor data. Our sensor names are:
```
...
```

---
### Camera Data
```
...
array([[661.094568 , 0.         , 370.6625195],
       ...
       [ 0.         , 0.         , 1.         ]]))
```
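
A minimal sketch of how a camera record and its intrinsics can be queried, assuming the standard NuScenes schema; `CAM_FRONT_CENTER` is one of this dataset's camera names (it also appears in the pose section below):
```
# Look up the front-center camera's sample_data record for this frame.
cam_data = nusc.get('sample_data', my_sample['data']['CAM_FRONT_CENTER'])

# The calibrated_sensor record holds the 3x3 camera intrinsic matrix.
calib = nusc.get('calibrated_sensor', cam_data['calibrated_sensor_token'])
print(cam_data['filename'])
print(calib['camera_intrinsic'])
```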

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66651bd4e4be2069a695e5a1/EBo7WeD9JV1asBfbONTym.png)

---
### LiDAR Data
```
...
 2.6000000e+01 7.5000000e+01]]
```
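
A minimal sketch of loading one LiDAR sweep with the devkit's point-cloud helper; the channel name `LIDAR_FRONT_CENTER` is a placeholder, substitute the dataset's actual LiDAR name from the sensor list above:
```
import os.path as osp
from nuscenes.utils.data_classes import LidarPointCloud

# Placeholder channel name; use the dataset's actual LiDAR channel.
lidar_data = nusc.get('sample_data', my_sample['data']['LIDAR_FRONT_CENTER'])

# Points are stored as a 4xN array: x, y, z, intensity.
pc = LidarPointCloud.from_file(osp.join(nusc.dataroot, lidar_data['filename']))
print(pc.points.shape)
```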

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66651bd4e4be2069a695e5a1/ZED1ba3r7qeBzkeNQK3oq.png)

---
### IMU Data

...

## Vehicle and Sensor Pose

...

CAM_FRONT_CENTER pose:
```
...
```
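
A minimal sketch of reading the two pose records attached to one `sample_data` entry, assuming the standard NuScenes schema (`ego_pose` and `calibrated_sensor` tables):
```
# Ego (vehicle) pose in the global frame at this camera's timestamp.
ego_pose = nusc.get('ego_pose', cam_data['ego_pose_token'])
print(ego_pose['translation'], ego_pose['rotation'])

# Sensor extrinsics: translation and rotation relative to the ego frame.
sensor_pose = nusc.get('calibrated_sensor', cam_data['calibrated_sensor_token'])
print(sensor_pose['translation'], sensor_pose['rotation'])
```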
<br/>

## LiDAR-Image projection
- Use NuScenes devkit's `render_pointcloud_in_image()` method.
- The first argument is a sample token, as in the sketch below.
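
A sketch of the call; the channel keyword arguments are assumptions (`camera_channel` takes one of the camera names above, and the `pointsensor_channel` value is a placeholder for the dataset's LiDAR name):
```
nusc.render_pointcloud_in_image(my_sample['token'],
                                pointsensor_channel='LIDAR_FRONT_CENTER',  # placeholder
                                camera_channel='CAM_FRONT_CENTER')
```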
Output:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66651bd4e4be2069a695e5a1/zDrqBzfs6oV5ugVCsCQLL.png)