Update README.md
README.md
CHANGED
@@ -38,9 +38,6 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 
 - **Developed by:** Sangbum Choi and Niels Rogge
 - **Funded by [optional]:** ARC FL-170100117 and IH-180100002.
-- **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
 - **License:** apache-2.0
 - **Finetuned from model [optional]:** [More Information Needed]
 
@@ -50,29 +47,22 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 
 - **Repository:** https://github.com/ViTAE-Transformer/ViTPose
 - **Paper [optional]:** https://arxiv.org/pdf/2204.12484
-- **Demo [optional]:**
+- **Demo [optional]:** https://huggingface.co/spaces?sort=trending&search=vitpose
 
 ## Uses
 
-[More Information Needed]
-
-### Out-of-Scope Use
-
-<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
-[More Information Needed]
+The ViTPose model, developed by the ViTAE-Transformer team, is primarily designed for pose estimation tasks. Here are some direct uses of the model:
+
+Human Pose Estimation: The model can be used to estimate the poses of humans in images or videos. This involves identifying the locations of key body joints such as the head, shoulders, elbows, wrists, hips, knees, and ankles.
+
+Action Recognition: By analyzing poses over time, the model can help recognize various human actions and activities.
+
+Surveillance: In security and surveillance applications, ViTPose can be used to monitor and analyze human behavior in public spaces or private premises.
+
+Health and Fitness: The model can be used in fitness apps to track and analyze exercise poses, providing feedback on form and technique.
+
+Gaming and Animation: ViTPose can be integrated into gaming and animation systems to create more realistic character movements and interactions.
 
 ## Bias, Risks, and Limitations
 
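Several of the uses listed in the hunk above come down to simple geometry on the predicted keypoints. As a hypothetical illustration (the keypoint indices assume the 17-keypoint COCO layout; none of this code is part of the card), a joint angle for exercise-form feedback can be computed from three keypoints:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at b (degrees) formed by the segments b->a and b->c, e.g. hip-knee-ankle."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# With COCO-ordered keypoints (right_hip=12, right_knee=14, right_ankle=16):
# knee_flexion = joint_angle(keypoints[12], keypoints[14], keypoints[16])
```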
@@ -88,12 +78,6 @@ prompt-based tuning to demonstrate the flexibility of ViTPose further. In additi
 
 ViTPose can also be applied to other pose estimation datasets, e.g., animal pose estimation [47, 9, 45]
 and face keypoint detection [21, 6]. We leave them as future work.
 
-### Recommendations
-
-<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
-Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
 ## How to Get Started with the Model
 
 Use the code below to get started with the model.
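The usage code itself is unchanged by this commit, so it is elided from the diff (only the `for pose_result in pose_results:` context is visible in the hunk headers). For orientation, here is a minimal sketch of single-image inference with the 🤗 Transformers ViTPose integration (`VitPoseForPoseEstimation` and `AutoProcessor`); the checkpoint id, image URL, and bounding box are placeholders rather than values taken from this card:

```python
import numpy as np
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, VitPoseForPoseEstimation

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder checkpoint and image; substitute the checkpoint this card describes.
checkpoint = "usyd-community/vitpose-base-simple"
url = "http://images.cocodataset.org/val2017/000000000139.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoProcessor.from_pretrained(checkpoint)
model = VitPoseForPoseEstimation.from_pretrained(checkpoint).to(device)

# ViTPose is a top-down estimator: it expects one (x, y, width, height) box per person,
# normally produced by a separate person detector. One example box is hard-coded here.
person_boxes = np.array([[412.8, 157.6, 53.6, 138.7]])

inputs = processor(image, boxes=[person_boxes], return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)

# One result list per image; each entry is a dict of keypoints and scores for one person.
pose_results = processor.post_process_pose_estimation(outputs, boxes=[person_boxes])
for person in pose_results[0]:
    print(person["keypoints"], person["scores"])
```

In practice the person boxes would come from an off-the-shelf detector; only the pose stage is sketched here.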
@@ -173,15 +157,16 @@ for pose_result in pose_results:
 
 <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
-[More Information Needed]
+Dataset details. We use the MS COCO [28], AI Challenger [41], MPII [3], and CrowdPose [22] datasets
+for training and evaluation. The OCHuman [54] dataset is only involved in the evaluation stage to measure
+the models' performance in dealing with occluded people. The MS COCO dataset contains 118K
+images and 150K human instances with at most 17 keypoint annotations per instance for training.
+The dataset is under the CC-BY-4.0 license. The MPII dataset is under the BSD license and contains
+15K images and 22K human instances for training, with at most 16 keypoints annotated for each
+instance. AI Challenger is much bigger and contains over 200K training images and 350K human
+instances, with at most 14 keypoints annotated per instance. OCHuman contains human instances
+with heavy occlusion and is used only for the val and test sets; it includes 4K images and 8K instances.
 
 
 #### Training Hyperparameters
@@ -199,69 +184,31 @@ for pose_result in pose_results:
 
 <!-- This section describes the evaluation protocols and provides the results. -->
 
-#### Factors
-
-<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
-[More Information Needed]
-
-#### Metrics
-
-<!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
-[More Information Needed]
+OCHuman val and test set. To evaluate the performance of human pose estimation models on
+human instances with heavy occlusion, we test the ViTPose variants and representative models on
+the OCHuman val and test sets with ground-truth bounding boxes. We do not adopt extra human
+detectors, since not all human instances are annotated in the OCHuman dataset; a human detector
+would produce many "false positive" bounding boxes that do not reflect the true ability of the
+pose estimation models. Specifically, the decoder head of ViTPose corresponding to the MS COCO
+dataset is used, as the keypoint definitions are the same in the MS COCO and OCHuman datasets.
+
+MPII val set. We evaluate the performance of ViTPose and representative models on the MPII val
+set with the ground-truth bounding boxes. Following the default settings of MPII, we use PCKh
+as the metric for performance evaluation.
 
 ### Results
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/FcHVFdUmCuT2m0wzB8QSS.png)
 
-#### Summary
-
-## Model Examination [optional]
-
-<!-- Relevant interpretability work for the model goes here -->
-
-[More Information Needed]
-
-## Environmental Impact
-
-<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
-Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
-- **Hardware Type:** [More Information Needed]
-- **Hours used:** [More Information Needed]
-- **Cloud Provider:** [More Information Needed]
-- **Compute Region:** [More Information Needed]
-- **Carbon Emitted:** [More Information Needed]
-
-## Technical Specifications [optional]
 
 ### Model Architecture and Objective
 
-[More Information Needed]
-
-### Compute Infrastructure
-
-[More Information Needed]
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/kf3e1ifJkVtOMbISvmMsM.png)
 
 #### Hardware
 
 The models are trained on 8 A100 GPUs based on the mmpose codebase [11].
 
-#### Software
-
-[More Information Needed]
 
 ## Citation [optional]
 
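The MPII protocol added above reports PCKh, i.e. the percentage of predicted keypoints whose distance to the ground truth falls below a threshold proportional to the annotated head-segment length (PCKh@0.5 uses a factor of 0.5). A rough NumPy illustration of the idea, with assumed array layouts and not taken from the evaluation code:

```python
import numpy as np

def pckh(pred, gt, visible, head_sizes, alpha=0.5):
    """Head-normalized PCK over a set of people.

    pred, gt:   (num_people, num_joints, 2) keypoint coordinates
    visible:    (num_people, num_joints) boolean mask of annotated joints
    head_sizes: (num_people,) head-segment length per person
    """
    dists = np.linalg.norm(pred - gt, axis=-1)   # per-joint pixel error
    thresh = alpha * head_sizes[:, None]         # per-person threshold
    correct = (dists <= thresh) & visible
    return correct.sum() / visible.sum()         # fraction of correct keypoints
```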
@@ -277,26 +224,4 @@ The models are trained on 8 A100 GPUs based on the mmpose codebase [11]
 archivePrefix={arXiv},
 primaryClass={cs.CV},
 url={https://arxiv.org/abs/2204.12484},
 }
-
-**APA:**
-
-[More Information Needed]
-
-## Glossary [optional]
-
-<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
-[More Information Needed]
-
-## More Information [optional]
-
-[More Information Needed]
-
-## Model Card Authors [optional]
-
-[More Information Needed]
-
-## Model Card Contact
-
-[More Information Needed]