---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- OpenSatMap
- Satellite
task_categories:
- image-segmentation
---

# OpenSatMap Dataset Card
## Description
The dataset contains 3,787 high-resolution satellite images with fine-grained annotations, covering diverse geographic locations and popular driving datasets. It can be used for large-scale map construction and downstream tasks such as autonomous driving. The images are collected from Google Maps at level-19 resolution (0.3 m/pixel) and level-20 resolution (0.15 m/pixel); we denote these subsets as OpenSatMap19 and OpenSatMap20, respectively. OpenSatMap19 contains 1,806 images collected from 8 cities in China: Beijing, Shanghai, Guangzhou, Shenzhen, Chengdu, Xi'an, Tianjin, and Shenyang. OpenSatMap20 contains 1,981 images collected from more than 50 cities in 18 countries all over the world. The figure below shows the sampling areas of the images in OpenSatMap.
For each image, we provide instance-level annotations and eight attributes for road structures, including lane lines, curbs, and virtual lines. The instances in OpenSatMap images are annotated by experts in remote sensing and computer vision. We will continue to update the dataset, growing its size and scope to reflect evolving real-world conditions.

## Image Source and Usage License
The OpenSatMap images are collected from Google Maps. The dataset will be licensed under a Creative Commons CC-BY-NC-SA 4.0 license, and usage of the images must respect the Google Maps Terms of Service.

## Line Category and Attribute
We use vectorized polylines to represent a line instance. We first categorize all lines into three categories: curb, lane line, and virtual line. A curb is the boundary of a road. Lane lines are the visible lines that form the lanes. A virtual line means that there is no lane line or curb at that location, but logically there should be a boundary to form a complete lane. Please refer to the figure below for examples of these three categories.

For each line instance, we provide eight attributes: **color, line type, number of lines, function, bidirection, boundary, shaded, clearness**. Specifically, they are:
- Color: The color of the line. It can be white, yellow, others, or none.
- Line type: The type of the line. It can be solid, thick solid, dashed, short dashed, dotted, others, or none.
- Number of lines: The number of lines. It can be single, double, others, or none.
- Function: The function of the line. It can be chevron markings, no parking, deceleration line, bus lane, tidal line, parking space, vehicle staging area, guide line, changeable line, lane-borrowing line, others, or none.
- Bidirection: Whether the line is bidirectional. It can be true or false.
- Boundary: Whether the line is a boundary. It can be true or false.
- Shaded: The degree of occlusion. It can be no, minor, or major.
- Clearness: The clearness of the line. It can be clear or fuzzy.
Note that curbs and virtual lines carry no man-made visible markings, so we annotate their color, line type, number of lines, and function as none.
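The category and attribute vocabulary above can be checked programmatically. Below is a minimal sketch of such a validator; the exact serialized string values (e.g. `"lane line"`, `"none"`) are assumptions for illustration and may differ from the released files.

```python
# Hypothetical sketch: validate a line instance against the category and
# attribute vocabulary described in this card. Only a subset of attributes
# is shown; the string tokens are assumed, not confirmed by the release.

VOCAB = {
    "category": {"curb", "lane line", "virtual line"},
    "color": {"white", "yellow", "others", "none"},
    "shaded": {"no", "minor", "major"},
    "clearness": {"clear", "fuzzy"},
}

def validate_instance(inst: dict) -> list:
    """Return (field, bad_value) pairs for fields outside the vocabulary."""
    errors = []
    for field, allowed in VOCAB.items():
        if field in inst and inst[field] not in allowed:
            errors.append((field, inst[field]))
    return errors

example = {"category": "curb", "color": "none", "shaded": "no", "clearness": "clear"}
print(validate_instance(example))  # → []
```

Instances that violate the vocabulary are reported field by field, which is useful when cross-checking a download against this card.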
## Annotation Format
The annotations are stored in JSON format. Each image is annotated with "image_width", "image_height", and a list of "lines" whose elements are line instances. Each line is annotated with "category", "points", "color", "line_type", "line_num", "function", "bidirection", "boundary", "shaded", and "clearness".
```
{"img_name": {
    "image_width": int,
    "image_height": int,
    "lines": [
        {
            "category": str,
            "points": [
                [float, float],
                [float, float],
                [float, float],
                ...
            ],
            "color": str,
            "line_type": str,
            "line_num": str,
            "function": str,
            "bidirection": bool,
            "boundary": bool,
            "shaded": str,
            "clearness": str
        },
        ...
    ]
}}
```
## Meta data
The meta data of GPS coordinates and image acquisition time are also provided, stored in a JSON file. Image names are keys, and the values list the tiles used in each image. Please refer to [get_google_maps_image](https://github.com/bjzhb666/get_google_maps_image) for more details. The meta data can be used to calculate the center of a picture; the code will be released in [Code (We will release all the codes as soon as possible)](https://github.com/OpenSatMap/OpenSatMap-offical).
```
{
    "img_name": [
        {
            "centerGPS": [float, float],
            "centerWorld": [float, float],
            "filename": str
        },
        ...
    ],
    ...
}
```
## Paper or resources for more information
[Paper](https://arxiv.org/abs/2410.23278), [Project](https://opensatmap.github.io/), [Code (We will release all the codes as soon as possible)](https://github.com/OpenSatMap/OpenSatMap-offical)

## Intended use
### Task 1: Instance-level Line Detection
The aim of this task is to extract road structures from satellite images at the instance level.
For each instance, we use polylines as the vectorized representation and pixel-level masks as the rasterized representation.
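The annotation schema above can be consumed with the standard library alone. The sketch below parses an inline sample that mimics the documented schema (the image name and attribute values are illustrative, not taken from the dataset) and computes the pixel length of each polyline.

```python
import json
import math

# Hypothetical sketch: read an annotation JSON in the documented schema and
# measure each vectorized polyline. The sample below is fabricated for
# illustration; real files follow the same structure.
sample = json.loads("""
{
  "example.png": {
    "image_width": 4096,
    "image_height": 4096,
    "lines": [
      {"category": "lane line",
       "points": [[10.0, 20.0], [13.0, 24.0], [13.0, 30.0]],
       "color": "white", "line_type": "solid", "line_num": "single",
       "function": "none", "bidirection": false, "boundary": false,
       "shaded": "no", "clearness": "clear"}
    ]
  }
}
""")

def polyline_length(points):
    """Sum of Euclidean segment lengths, in pixels."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

for img_name, ann in sample.items():
    for line in ann["lines"]:
        print(img_name, line["category"], polyline_length(line["points"]))
# → example.png lane line 11.0
```

The same loop is a natural starting point for rasterizing each polyline into a per-instance mask when a pixel-level representation is needed.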
### Task 2: Satellite-enhanced Online Map Construction
We use satellite images to enhance online map construction for autonomous driving. The inputs are camera images from an autonomous vehicle together with satellite images of the same area, and the outputs are vectorized map elements around the vehicle.
**Alignment with driving benchmark (nuScenes)**
## Citation
```
@article{zhao2024opensatmap,
  title={OpenSatMap: A Fine-grained High-resolution Satellite Dataset for Large-scale Map Construction},
  author={Zhao, Hongbo and Fan, Lue and Chen, Yuntao and Wang, Haochen and Jin, Xiaojuan and Zhang, Yixin and Meng, Gaofeng and Zhang, Zhaoxiang},
  journal={arXiv preprint arXiv:2410.23278},
  year={2024}
}
```