
Cloud-Adapter-Datasets

This dataset card describes the datasets used in Cloud-Adapter: a collection of high-resolution satellite images and semantic segmentation masks for cloud detection and related tasks.

Install

pip install huggingface-hub

Usage

# Step 1: Download datasets
huggingface-cli download --repo-type dataset XavierJiezou/cloud-adapter-datasets --local-dir data --include hrc_whu.zip 
huggingface-cli download --repo-type dataset XavierJiezou/cloud-adapter-datasets --local-dir data --include gf12ms_whu_gf1.zip
huggingface-cli download --repo-type dataset XavierJiezou/cloud-adapter-datasets --local-dir data --include gf12ms_whu_gf2.zip
huggingface-cli download --repo-type dataset XavierJiezou/cloud-adapter-datasets --local-dir data --include cloudsen12_high_l1c.zip
huggingface-cli download --repo-type dataset XavierJiezou/cloud-adapter-datasets --local-dir data --include cloudsen12_high_l2a.zip
huggingface-cli download --repo-type dataset XavierJiezou/cloud-adapter-datasets --local-dir data --include l8_biome.zip

# Step 2: Extract datasets (the zip files were downloaded into data/)
cd data
unzip hrc_whu.zip -d hrc_whu
unzip gf12ms_whu_gf1.zip -d gf12ms_whu_gf1
unzip gf12ms_whu_gf2.zip -d gf12ms_whu_gf2
unzip cloudsen12_high_l1c.zip -d cloudsen12_high_l1c
unzip cloudsen12_high_l2a.zip -d cloudsen12_high_l2a
unzip l8_biome.zip -d l8_biome
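After extraction, the example further below assumes an `img_dir`/`ann_dir` layout with per-split subdirectories (e.g. `train`). A quick sanity check that every image has a same-named annotation can be sketched as follows; `check_pairs` is a hypothetical helper, demonstrated on a synthetic directory tree rather than the real data:

```python
import os
import tempfile

def check_pairs(root, split="train"):
    """Return (image stems missing a mask, mask stems missing an image),
    assuming the layout <root>/img_dir/<split> and <root>/ann_dir/<split>."""
    imgs = os.listdir(os.path.join(root, "img_dir", split))
    anns = os.listdir(os.path.join(root, "ann_dir", split))
    # Compare filename stems so e.g. .tif images can pair with .png masks
    img_stems = {os.path.splitext(f)[0] for f in imgs}
    ann_stems = {os.path.splitext(f)[0] for f in anns}
    return img_stems - ann_stems, ann_stems - img_stems

# Demo on a synthetic tree standing in for e.g. data/hrc_whu
with tempfile.TemporaryDirectory() as root:
    for d in (os.path.join("img_dir", "train"), os.path.join("ann_dir", "train")):
        os.makedirs(os.path.join(root, d))
    for name in ("a.png", "b.png"):
        open(os.path.join(root, "img_dir", "train", name), "w").close()
    open(os.path.join(root, "ann_dir", "train", "a.png"), "w").close()
    missing_ann, missing_img = check_pairs(root)
    print("images without masks:", sorted(missing_ann))  # ['b']
```

Running the same check against each extracted dataset directory is a cheap way to catch an incomplete unzip before training.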

Example

import os
import zipfile
from huggingface_hub import hf_hub_download

# Define the dataset repository
repo_id = "XavierJiezou/cloud-adapter-datasets"
# Select the zip file of the dataset to download
zip_files = [
    "hrc_whu.zip",
    # "gf12ms_whu_gf1.zip",
    # "gf12ms_whu_gf2.zip",
    # "cloudsen12_high_l1c.zip",
    # "cloudsen12_high_l2a.zip",
    # "l8_biome.zip",
]

# Define a directory to extract the datasets
output_dir = "cloud_adapter_paper_data"

# Ensure the output directory exists
os.makedirs(output_dir, exist_ok=True)

# Step 1: Download and extract each ZIP file
for zip_file in zip_files:
    print(f"Downloading {zip_file}...")
    # Download the ZIP file from Hugging Face Hub
    zip_path = hf_hub_download(repo_id=repo_id, filename=zip_file, repo_type="dataset")
    
    # Extract the ZIP file
    extract_path = os.path.join(output_dir, zip_file.replace(".zip", ""))
    with zipfile.ZipFile(zip_path, "r") as zip_ref:
        print(f"Extracting {zip_file} to {extract_path}...")
        zip_ref.extractall(extract_path)

# Step 2: Explore the extracted datasets
# Example: Load and display the contents of the "hrc_whu" dataset
dataset_path = os.path.join(output_dir, "hrc_whu")
train_images_path = os.path.join(dataset_path, "img_dir", "train")
train_annotations_path = os.path.join(dataset_path, "ann_dir", "train")

# Display some files in the training set
print("Training Images:", os.listdir(train_images_path)[:5])
print("Training Annotations:", os.listdir(train_annotations_path)[:5])

# Example: Load and display an image and its annotation
from PIL import Image

# Load an example image and annotation
image_path = os.path.join(train_images_path, os.listdir(train_images_path)[0])
annotation_path = os.path.join(train_annotations_path, os.listdir(train_annotations_path)[0])

# Open and display the image
image = Image.open(image_path)
annotation = Image.open(annotation_path)

print("Displaying the image...")
image.show()

print("Displaying the annotation...")
annotation.show()
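The annotation masks are single-channel label images, so per-class statistics can be computed directly with NumPy. The snippet below sketches this on a tiny synthetic mask standing in for `np.array(annotation)`; the encoding 0 = clear, 1 = cloud is an assumption for illustration, not the dataset specification:

```python
import numpy as np

# Synthetic 4x4 mask standing in for np.array(Image.open(annotation_path));
# the label encoding (0 = clear, 1 = cloud) is assumed here.
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:2, :] = 1  # mark the top half as "cloud"

# Count how many pixels carry each label value
values, counts = np.unique(mask, return_counts=True)
histogram = dict(zip(values.tolist(), counts.tolist()))
cloud_fraction = histogram.get(1, 0) / mask.size

print("label histogram:", histogram)            # {0: 8, 1: 8}
print(f"cloud fraction: {cloud_fraction:.2f}")  # 0.50
```

The same two lines (`np.unique` plus a division by `mask.size`) give the class balance of any real mask, which is useful for choosing loss weights.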

Source Data

Citation

@article{hrc_whu,
  title={Deep learning based cloud detection for medium and high resolution remote sensing images of different sensors},
  author={Li, Zhiwei and Shen, Huanfeng and Cheng, Qing and Liu, Yuhao and You, Shucheng and He, Zongyi},
  journal={ISPRS Journal of Photogrammetry and Remote Sensing},
  volume={150},
  pages={197-212},
  year={2019},
}

@article{gf12ms_whu,
  title={Transferring Deep Models for Cloud Detection in Multisensor Images via Weakly Supervised Learning},
  author={Zhu, Shaocong and Li, Zhiwei and Shen, Huanfeng},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  year={2024},
  volume={62},
  pages={1-18},
}

@article{cloudsen12_high,
  title={CloudSEN12, a global dataset for semantic understanding of cloud and cloud shadow in Sentinel-2},
  author={Aybar, Cesar and Ysuhuaylas, Luis and Loja, Jhomira and Gonzales, Karen and Herrera, Fernando and Bautista, Lesly and Yali, Roy and Flores, Angie and Diaz, Lissette and Cuenca, Nicole and others},
  journal={Scientific data},
  volume={9},
  number={1},
  pages={782},
  year={2022},
}

@article{l8_biome,
  title={Cloud detection algorithm comparison and validation for operational Landsat data products},
  author={Foga, Steve and Scaramuzza, Pat L. and Guo, Song and Zhu, Zhe and Dilley, Ronald D. and Beckmann, Tim and Schmidt, Gail L. and Dwyer, John L. and Hughes, M. Joseph and Laue, Brady},
  journal={Remote Sensing of Environment},
  volume={194},
  pages={379-390},
  year={2017},
}

Contact

For questions, please contact Xavier Jiezou at xuechaozou (at) foxmail (dot) com.
