Real-time DEtection Transformer (RT-DETR) landed in @huggingface transformers 🤩 with Apache 2.0 license!
Do DETRs Beat YOLOs on Real-time Object Detection? Keep reading 👇
![video_1](video_1.mp4)
Short answer: they do!
[notebook](https://t.co/NNRpG9cAEa), [models](https://t.co/ctwWQqNcEt), [demo](https://t.co/VrmDDDjoNw)
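Since the model is in transformers, running it takes only a few lines. Below is a minimal sketch in the spirit of the linked notebook; the class names and the `PekingU/rtdetr_r50vd` checkpoint are taken from the Hugging Face RT-DETR documentation linked further down, and the 0.5 confidence threshold is just an example.

```python
# Minimal RT-DETR inference sketch with transformers (see the linked notebook/docs).
import torch
import requests
from PIL import Image
from transformers import RTDetrForObjectDetection, RTDetrImageProcessor

# A sample COCO validation image; any RGB image works.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd")
model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Turn raw logits/boxes into labeled detections -- note that no NMS step is involved.
results = processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.5
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 2), [round(c, 1) for c in box.tolist()])
```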
YOLO models are known to be super fast for real-time computer vision, but they come with a downside: they rely on NMS (non-maximum suppression) post-processing, which makes their speed volatile 🥲
Transformer-based detectors, on the other hand, are not as computationally efficient 🥲 Isn't there something in between? Enter RT-DETR!
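To make the NMS point above concrete, here is a toy sketch of the post-processing step YOLO-style detectors depend on, using `torchvision.ops.nms`; the boxes, scores, and thresholds are made-up example values, just to show the extra hand-tuned pass that RT-DETR drops.

```python
# Toy illustration of greedy non-maximum suppression (NMS); all numbers are made up.
import torch
from torchvision.ops import nms

# Hypothetical raw detections: boxes as (x1, y1, x2, y2) plus confidence scores.
boxes = torch.tensor([
    [10.0, 10.0, 110.0, 110.0],
    [12.0, 12.0, 112.0, 112.0],   # near-duplicate of the first box
    [200.0, 200.0, 300.0, 300.0],
])
scores = torch.tensor([0.90, 0.85, 0.75])

# Keep only the highest-scoring box among heavily overlapping ones (IoU > 0.5).
keep = nms(boxes, scores, iou_threshold=0.5)
print(keep)  # tensor([0, 2]) -- the near-duplicate gets suppressed

# The IoU and confidence thresholds are hand-tuned, and this extra pass over all
# candidate boxes adds latency that varies with how many boxes survive filtering.
```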
The authors combine a CNN backbone, a multi-stage hybrid encoder (mixing convolutions and attention) and a transformer decoder ⬇️
![image_1](image_1.jpg)
In the paper, the authors also claim that one can adjust inference speed by changing the number of decoder layers used, without retraining. They also conduct many ablation studies and try different decoders (see the figure below; a rough sketch of the layer-count idea follows it).
![image_2](image_2.jpg)
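As a rough illustration of that speed/accuracy knob: since each decoder layer refines the previous layer's predictions, one could load the pretrained checkpoint with fewer decoder layers. The config override below is a sketch of the idea, not the paper's exact procedure; `decoder_layers` is the transformers config field I'm assuming controls the decoder depth, so check `model.config` on your install before relying on it.

```python
# Sketch: trading accuracy for speed by loading fewer decoder layers, no retraining.
# Assumes RTDetrConfig exposes `decoder_layers`; this is an illustration of the idea,
# not the paper's recipe.
from transformers import RTDetrForObjectDetection

full = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd")
print("default decoder layers:", full.config.decoder_layers)

# Re-load with a truncated decoder; weights for the dropped layers are simply ignored
# (transformers will warn about unused checkpoint weights). Expect some accuracy drop.
fast = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd", decoder_layers=3)
print("truncated decoder layers:", fast.config.decoder_layers)
```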
The authors find that the model outperforms the previous state-of-the-art in both speed and accuracy 🤩
![image_3](image_3.jpg)
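If you want a quick sanity check of the latency side on your own hardware, a rough timing loop like the one below works; it is nowhere near the paper's benchmark setup (the reported FPS numbers are on a T4 GPU), and the 640×640 input is simply RT-DETR's usual resolution.

```python
# Very rough latency check; numbers depend entirely on your hardware and will not
# match the paper's T4 GPU benchmark. 640x640 is RT-DETR's usual input size.
import time
import torch
from transformers import RTDetrForObjectDetection

model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd").eval()
dummy = torch.randn(1, 3, 640, 640)

with torch.no_grad():
    for _ in range(3):                      # warm-up iterations
        model(pixel_values=dummy)
    start = time.perf_counter()
    for _ in range(20):
        model(pixel_values=dummy)
    elapsed = time.perf_counter() - start

print(f"~{20 / elapsed:.1f} images/sec on this machine (batch size 1, CPU here)")
```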
According to the authors' findings, RT-DETR also scales well: the ResNet-50 and ResNet-101 variants outperform comparably sized YOLO detectors in both accuracy and FPS on COCO.
> [!TIP] | |
Resources:
[DETRs Beat YOLOs on Real-time Object Detection](https://arxiv.org/abs/2304.08069) | |
by Yian Zhao, Wenyu Lv, Shangliang Xu, Jinman Wei, Guanzhong Wang, Qingqing Dang, Yi Liu, Jie Chen (2023) | |
[GitHub](https://github.com/lyuwenyu/RT-DETR/) | |
[Hugging Face documentation](https://huggingface.co/docs/transformers/main/en/model_doc/rt_detr) | |
> [!NOTE] | |
[Original tweet](https://twitter.com/mervenoyann/status/1807790959884665029) (July 1, 2024) |