---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: keypoint-detection
---
# Model Card for ViTPose (vitpose-base-simple)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/ZuIwMdomy2_6aJ_JTE1Yd.png)
<!-- Provide a quick summary of what the model is/does. -->
ViTPose was introduced in the papers *ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation* and *ViTPose+: Vision Transformer Foundation Model for Generic Body Pose Estimation*. It obtains 81.1 AP on the MS COCO Keypoint test-dev set.
## Model Details
Although no specific domain knowledge is considered in the design, plain vision transformers have shown excellent performance in visual recognition tasks. However, little effort has been made to reveal the potential of such simple structures for
pose estimation tasks. In this paper, we show the surprisingly good capabilities of plain vision transformers for pose estimation from various aspects, namely simplicity in model structure, scalability in model size, flexibility in training paradigm,
and transferability of knowledge between models, through a simple baseline model called ViTPose. Specifically, ViTPose employs plain and non-hierarchical vision
transformers as backbones to extract features for a given person instance and a
lightweight decoder for pose estimation. It can be scaled up from 100M to 1B
parameters by taking advantage of the scalable model capacity and high
parallelism of transformers, setting a new Pareto front between throughput and performance. Besides, ViTPose is very flexible regarding the attention type, input resolution, pre-training and finetuning strategy, as well as dealing with multiple pose
tasks. We also empirically demonstrate that the knowledge of large ViTPose models
can be easily transferred to small ones via a simple knowledge token. Experimental
results show that our basic ViTPose model outperforms representative methods
on the challenging MS COCO Keypoint Detection benchmark, while the largest
model sets a new state-of-the-art, i.e., 80.9 AP on the MS COCO test-dev set. The
code and models are available at https://github.com/ViTAE-Transformer/ViTPose
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Sangbum Choi and Niels Rogge
- **Funded by [optional]:** ARC FL-170100117 and IH-180100002.
- **License:** apache-2.0
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/ViTAE-Transformer/ViTPose
- **Paper [optional]:** https://arxiv.org/pdf/2204.12484
- **Demo [optional]:** https://huggingface.co/spaces?sort=trending&search=vitpose
## Uses
The ViTPose model, developed by the ViTAE-Transformer team, is primarily designed for pose estimation tasks. Here are some direct uses of the model:

- **Human Pose Estimation:** The model can be used to estimate the poses of humans in images or videos. This involves identifying the locations of key body joints such as the head, shoulders, elbows, wrists, hips, knees, and ankles.
- **Action Recognition:** By analyzing poses over time, the model can help recognize various human actions and activities.
- **Surveillance:** In security and surveillance applications, ViTPose can be used to monitor and analyze human behavior in public spaces or private premises.
- **Health and Fitness:** The model can be utilized in fitness apps to track and analyze exercise poses, providing feedback on form and technique (see the sketch below).
- **Gaming and Animation:** ViTPose can be integrated into gaming and animation systems to create more realistic character movements and interactions.
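As an illustration of the fitness use case above, here is a minimal sketch (not part of the original release) of how keypoints predicted by the model could be turned into a joint angle. It assumes the standard COCO keypoint ordering; the helper name `elbow_angle` is hypothetical.

```python
import numpy as np

# Assumed standard COCO keypoint indices for the left arm.
LEFT_SHOULDER, LEFT_ELBOW, LEFT_WRIST = 5, 7, 9

def elbow_angle(keypoints: np.ndarray) -> float:
    """Hypothetical helper: angle (degrees) at the left elbow, given a
    (17, 2) array of (x, y) keypoints in COCO order."""
    shoulder, elbow, wrist = keypoints[[LEFT_SHOULDER, LEFT_ELBOW, LEFT_WRIST]]
    upper_arm = shoulder - elbow
    forearm = wrist - elbow
    cos_angle = np.dot(upper_arm, forearm) / (
        np.linalg.norm(upper_arm) * np.linalg.norm(forearm) + 1e-8
    )
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
```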
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
In this paper, we propose a simple yet effective vision transformer baseline for pose estimation,
i.e., ViTPose. Despite no elaborate designs in structure, ViTPose obtains SOTA performance
on the MS COCO dataset. However, the potential of ViTPose is not fully explored with more
advanced technologies, such as complex decoders or FPN structures, which may further improve the
performance. Besides, although ViTPose demonstrates exciting properties such as simplicity,
scalability, flexibility, and transferability, more research efforts could be made, e.g., exploring
prompt-based tuning to further demonstrate the flexibility of ViTPose. In addition, we believe
ViTPose can also be applied to other pose estimation datasets, e.g., animal pose estimation [47, 9, 45]
and face keypoint detection [21, 6]. We leave these as future work.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import numpy as np
import requests
import torch
from PIL import Image

from transformers import (
    RTDetrForObjectDetection,
    RTDetrImageProcessor,
    VitPoseConfig,
    VitPoseForPoseEstimation,
    VitPoseImageProcessor,
)

url = "http://images.cocodataset.org/val2017/000000000139.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Stage 1. Run an object detector to find person boxes
# (users can replace this object_detector part with any detector).
person_image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd_coco_o365")
person_model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd_coco_o365")

inputs = person_image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = person_model(**inputs)

results = person_image_processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([(image.height, image.width)]), threshold=0.3
)


def pascal_voc_to_coco(bboxes: np.ndarray) -> np.ndarray:
    """
    Converts bounding boxes from the Pascal VOC format to the COCO format.

    In other words, converts from (top_left_x, top_left_y, bottom_right_x, bottom_right_y)
    format to (top_left_x, top_left_y, width, height).

    Args:
        bboxes (`np.ndarray` of shape `(batch_size, 4)`):
            Bounding boxes in Pascal VOC format.

    Returns:
        `np.ndarray` of shape `(batch_size, 4)` in COCO format.
    """
    bboxes[:, 2] = bboxes[:, 2] - bboxes[:, 0]
    bboxes[:, 3] = bboxes[:, 3] - bboxes[:, 1]
    return bboxes


# The "person" class corresponds to label index 0 in the COCO dataset.
boxes = results[0]["boxes"][results[0]["labels"] == 0]
boxes = [pascal_voc_to_coco(boxes.cpu().numpy())]

# Stage 2. Run ViTPose on the detected person boxes.
config = VitPoseConfig()
image_processor = VitPoseImageProcessor.from_pretrained("nielsr/vitpose-base-simple")
model = VitPoseForPoseEstimation.from_pretrained("nielsr/vitpose-base-simple")

pixel_values = image_processor(image, boxes=boxes, return_tensors="pt").pixel_values

with torch.no_grad():
    outputs = model(pixel_values)

pose_results = image_processor.post_process_pose_estimation(outputs, boxes=boxes)[0]

for pose_result in pose_results:
    print(pose_result)
```
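To inspect the predictions visually, the following is a minimal sketch (not part of the original example) that draws the predicted keypoints on the image with PIL. It assumes `image` and `pose_results` come from the snippet above and that each pose result contains `"keypoints"` and `"scores"` entries as returned by `post_process_pose_estimation`.

```python
from PIL import ImageDraw

# Draw each predicted keypoint as a small circle on a copy of the input image.
annotated = image.copy()
draw = ImageDraw.Draw(annotated)

for pose_result in pose_results:
    keypoints = pose_result["keypoints"]
    scores = pose_result["scores"]
    for (x, y), score in zip(keypoints.tolist(), scores.tolist()):
        if score < 0.3:  # skip low-confidence keypoints
            continue
        draw.ellipse([x - 3, y - 3, x + 3, y + 3], fill="red")

annotated.save("pose_overlay.png")
```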
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
**Dataset details.** We use the MS COCO [28], AI Challenger [41], MPII [3], and CrowdPose [22] datasets
for training and evaluation. The OCHuman [54] dataset is only involved in the evaluation stage to measure
the models' performance on occluded people. The MS COCO dataset contains 118K
images and 150K human instances with at most 17 keypoint annotations per instance for training;
it is under the CC-BY-4.0 license. The MPII dataset is under the BSD license and contains
15K images and 22K human instances for training, with at most 16 keypoints annotated per
instance. AI Challenger is much bigger and contains over 200K training
images and 350K human instances, with at most 14 keypoints annotated per instance. OCHuman
contains human instances with heavy occlusion and is only used for the val and test sets; it includes
4K images and 8K instances.
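For reference, the 17 keypoints annotated per person in MS COCO follow a fixed ordering. The list below reflects the standard COCO convention (an assumption of this card, not something stated in the paper excerpt above).

```python
# Standard MS COCO keypoint ordering (17 keypoints per person instance).
COCO_KEYPOINT_NAMES = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]
```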
#### Training Hyperparameters
- **Training regime:** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/Gj6gGcIGO3J5HD2MAB_4C.png)
<!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/rsCmn48SAvhi8xwJhX8h5.png)
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
**OCHuman val and test set.** To evaluate the performance of human pose estimation models on
human instances with heavy occlusion, we test the ViTPose variants and representative models on
the OCHuman val and test sets with ground truth bounding boxes. We do not adopt extra human
detectors since not all human instances are annotated in the OCHuman dataset; a detector would
produce many "false positive" bounding boxes and would not reflect the true ability of the
pose estimation models. Specifically, the decoder head of ViTPose corresponding to the MS COCO
dataset is used, as the keypoint definitions are the same in the MS COCO and OCHuman datasets.

**MPII val set.** We evaluate the performance of ViTPose and representative models on the MPII val
set with ground truth bounding boxes. Following the default settings of MPII, we use PCKh
as the metric for performance evaluation.
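PCKh (Percentage of Correct Keypoints, head-normalized) counts a predicted keypoint as correct when its distance to the ground truth is below a fraction (typically 0.5) of the head segment length. The NumPy sketch below is an illustration of this metric, not the official evaluation code.

```python
import numpy as np

def pckh(pred, gt, head_sizes, visible, alpha=0.5):
    """Illustrative PCKh computation (not the official evaluation code).

    pred, gt: (N, K, 2) arrays of predicted / ground-truth keypoints.
    head_sizes: (N,) array with the head segment length of each instance.
    visible: (N, K) boolean mask of annotated keypoints.
    alpha: threshold fraction of the head size (0.5 for PCKh@0.5).
    """
    dists = np.linalg.norm(pred - gt, axis=-1)    # (N, K) distances
    thresholds = alpha * head_sizes[:, None]      # (N, 1) per-instance thresholds
    correct = (dists <= thresholds) & visible
    return correct.sum() / visible.sum()
```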
### Results
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/FcHVFdUmCuT2m0wzB8QSS.png)
### Model Architecture and Objective
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/kf3e1ifJkVtOMbISvmMsM.png)
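This checkpoint uses the "simple" decoder variant described in the paper: the ViT feature map is upsampled by a factor of 4 with bilinear interpolation, passed through a ReLU, and projected to 17 keypoint heatmaps by a 3x3 convolution. Below is a rough PyTorch sketch of that decoder, written for illustration rather than copied from the `transformers` implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleDecoder(nn.Module):
    """Rough sketch of ViTPose's simple decoder: bilinear upsampling x4,
    ReLU, then a 3x3 convolution producing one heatmap per keypoint."""

    def __init__(self, in_channels: int = 768, num_keypoints: int = 17):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, num_keypoints, kernel_size=3, padding=1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, in_channels, H/16, W/16) patch feature map from the ViT backbone
        x = F.interpolate(features, scale_factor=4, mode="bilinear", align_corners=False)
        x = F.relu(x)
        return self.conv(x)  # (batch, num_keypoints, H/4, W/4) heatmaps
```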
#### Hardware
The models are trained on 8 A100 GPUs based on the mmpose codebase [11].
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{xu2022vitposesimplevisiontransformer,
      title={ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation},
      author={Yufei Xu and Jing Zhang and Qiming Zhang and Dacheng Tao},
      year={2022},
      eprint={2204.12484},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2204.12484},
}
```