TAO-Amodal Dataset

Official Source for Downloading the TAO-Amodal and TAO Datasets.

πŸ“™ Project Page | πŸ’» Code | πŸ“Ž Paper Link | ✏️ Citations

(Teaser figure: TAO-Amodal)

Contact: πŸ™‹πŸ»β€β™‚οΈ Cheng-Yen (Wesley) Hsieh

Dataset Description

Our dataset augments the TAO dataset with amodal bounding box annotations for fully invisible, out-of-frame, and occluded objects. Note that this implies TAO-Amodal also includes modal segmentation masks (as visualized in the color overlays above). Our dataset encompasses 880 categories, aimed at assessing the occlusion reasoning capabilities of current trackers through the paradigm of Tracking Any Object with Amodal perception (TAO-Amodal).

You can also find the annotations of the TAO dataset in the annotations folder.

Dataset Download

  1. Download with git:

git lfs install
git clone git@hf.co:datasets/chengyenhsieh/TAO-Amodal

     or with the Hugging Face Hub client (a sketch for downloading only the annotation folders follows after these steps):

from huggingface_hub import snapshot_download
snapshot_download(repo_id="chengyenhsieh/TAO-Amodal", repo_type="dataset")
  2. Unzip all videos:

Modify dataset_root in unzip_video.py and run:

python unzip_video.py
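
If you only need the annotation files rather than the zipped frames, the download can be restricted; the following is a minimal sketch using snapshot_download's allow_patterns filter (the folder names follow the structure in the next section, and local_dir is an arbitrary target path):

from huggingface_hub import snapshot_download

# Sketch: fetch only the annotation folders and skip the large frame archives.
# Folder names follow the dataset structure shown below; local_dir is arbitrary.
snapshot_download(
    repo_id="chengyenhsieh/TAO-Amodal",
    repo_type="dataset",
    allow_patterns=["amodal_annotations/*", "annotations/*", "BURST_annotations/*"],
    local_dir="TAO-Amodal",
)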

πŸ“š Dataset Structure

The dataset should be structured like this:

   TAO-Amodal
    β”œβ”€β”€ frames
    β”‚    └── train
    β”‚       β”œβ”€β”€ ArgoVerse
    β”‚       β”œβ”€β”€ BDD
    β”‚       β”œβ”€β”€ Charades
    β”‚       β”œβ”€β”€ HACS
    β”‚       β”œβ”€β”€ LaSOT
    β”‚       └── YFCC100M
    β”œβ”€β”€ amodal_annotations
    β”‚    β”œβ”€β”€ train/validation/test.json
    β”‚    β”œβ”€β”€ train_lvis_v1.json
    β”‚    └── validation_lvis_v1.json
    β”œβ”€β”€ annotations (TAO annotations)
    β”‚    β”œβ”€β”€ train/validation.json
    β”‚    β”œβ”€β”€ train/validation_with_freeform.json
    β”‚    └── README.md
    β”œβ”€β”€ example_output
    β”‚    └── prediction.json
    β”œβ”€β”€ BURST_annotations
    β”‚    β”œβ”€β”€ train
    β”‚    β”‚    └── train_visibility.json
    β”‚    ...
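
Once the frames are unzipped, each video is a folder of image frames under frames/train/<data source>/<video>. The snippet below is a minimal sketch, assuming this layout, that collects the frame paths for each video; the root path "TAO-Amodal" and the image extensions are assumptions to adjust for your setup.

from pathlib import Path

# Minimal sketch (assumed layout): gather frame image paths per video under
# frames/train/<source>/<video>/. Adjust dataset_root to your local copy.
dataset_root = Path("TAO-Amodal")

frames_by_video = {}
for frame in sorted((dataset_root / "frames" / "train").glob("*/*/*")):
    if frame.suffix.lower() in {".jpg", ".jpeg", ".png"}:
        video_name = f"{frame.parent.parent.name}/{frame.parent.name}"
        frames_by_video.setdefault(video_name, []).append(frame)

print(len(frames_by_video), "videos found")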

πŸ“š File Descriptions

train/validation/test.json: Formal annotation files. We use these annotations for visualization. Categories include those in LVIS v0.5 and freeform categories.
train_lvis_v1.json: We use this file to train our amodal-expander, treating each image frame as an independent sequence. Categories are aligned with those in LVIS v1.0.
validation_lvis_v1.json: We use this file to evaluate our amodal-expander. Categories are aligned with those in LVIS v1.0.
prediction.json: Example output JSON from the amodal-expander. Tracker predictions should be structured like this file to be evaluated with our evaluation toolkit.
BURST_annotations/XXX.json: Modal mask annotations from the BURST dataset with our heuristic visibility attributes. We provide these files for the convenience of visualization.

Annotation and Prediction Format

Our annotations are structured similarly to TAO, with some modifications.

Annotation file format:
{
    "info" : info,
    "images" : [image],
    "videos": [video],
    "tracks": [track],
    "annotations" : [annotation],
    "categories": [category],
    "licenses" : [license],
}
annotation: {
    "id": int,
    "image_id": int,
    "track_id": int,
    "bbox": [x,y,width,height],
    "area": float,

    # Redundant field for compatibility with COCO scripts
    "category_id": int,
    "video_id": int,

    # Other important attributes for evaluation on TAO-Amodal
    "amodal_bbox": [x,y,width,height],
    "amodal_is_uncertain": bool,
    "visibility": float, (0.~1.0)
}
image, info, video, track, category, licenses: Same as TAO.
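
As a quick sanity check, an annotation file can be inspected with plain json; below is a minimal sketch, assuming the validation annotations are at amodal_annotations/validation.json (adjust the path to your local copy):

import json

# Minimal sketch: load one annotation file and inspect the amodal fields above.
with open("amodal_annotations/validation.json") as f:
    data = json.load(f)

print(len(data["videos"]), "videos,", len(data["annotations"]), "annotations")

# Count annotations of heavily occluded objects (visibility below 50%).
heavily_occluded = [
    ann for ann in data["annotations"] if ann.get("visibility", 1.0) < 0.5
]
print(len(heavily_occluded), "annotations with visibility < 0.5")

# Each annotation carries both a modal and an amodal box in [x, y, width, height].
ann = data["annotations"][0]
print("modal bbox: ", ann["bbox"])
print("amodal bbox:", ann["amodal_bbox"])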

Predictions should be structured as:

[{
    "image_id" : int,
    "category_id" : int,
    "bbox" : [x,y,width,height],
    "score" : float,
    "track_id": int,
    "video_id": int
}]
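
For reference, the following is a minimal sketch of writing such a file; the values are placeholders rather than real tracker output, and prediction.json is just an example path:

import json

# Sketch: dump tracker output in the prediction format expected by the
# evaluation toolkit. The values below are placeholders, not real predictions.
predictions = [
    {
        "image_id": 0,
        "category_id": 95,
        "bbox": [114.0, 166.0, 67.0, 71.0],  # [x, y, width, height]
        "score": 0.87,
        "track_id": 0,
        "video_id": 0,
    },
]

with open("prediction.json", "w") as f:
    json.dump(predictions, f)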

Refer to the instructions of the TAO dataset for further details.

πŸ“Ί Example Sequences

Check here for more examples and here for visualization code.

Citation

@article{hsieh2023tracking,
  title={Tracking any object amodally},
  author={Hsieh, Cheng-Yen and Khurana, Tarasha and Dave, Achal and Ramanan, Deva},
  journal={arXiv preprint arXiv:2312.12433},
  year={2023}
}
Please also cite the TAO and BURST datasets if you use our dataset:
@inproceedings{dave2020tao,
  title={Tao: A large-scale benchmark for tracking any object},
  author={Dave, Achal and Khurana, Tarasha and Tokmakov, Pavel and Schmid, Cordelia and Ramanan, Deva},
  booktitle={Computer Vision--ECCV 2020: 16th European Conference, Glasgow, UK, August 23--28, 2020, Proceedings, Part V 16},
  pages={436--454},
  year={2020},
  organization={Springer}
}

@inproceedings{athar2023burst,
  title={Burst: A benchmark for unifying object recognition, segmentation and tracking in video},
  author={Athar, Ali and Luiten, Jonathon and Voigtlaender, Paul and Khurana, Tarasha and Dave, Achal and Leibe, Bastian and Ramanan, Deva},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={1674--1683},
  year={2023}
}