EfficientViM: Efficient Vision Mamba with Hidden State Mixer based State Space Duality
Abstract
For the deployment of neural networks in resource-constrained environments, prior works have built lightweight architectures with convolution and attention for capturing local and global dependencies, respectively. Recently, the state space model (SSM) has emerged as an effective mechanism for global token interaction thanks to its favorable linear computational cost in the number of tokens. Yet, efficient vision backbones built with SSMs remain underexplored. In this paper, we introduce Efficient Vision Mamba (EfficientViM), a novel architecture built on hidden state mixer-based state space duality (HSM-SSD) that efficiently captures global dependencies at further reduced computational cost. The HSM-SSD layer redesigns the standard SSD layer so that the channel mixing operation is performed within the hidden states. Additionally, we propose multi-stage hidden state fusion to further reinforce the representational power of hidden states, and present a design that alleviates the bottleneck caused by memory-bound operations. As a result, the EfficientViM family achieves a new state-of-the-art speed-accuracy trade-off on ImageNet-1k, offering up to a 0.7% accuracy improvement over the second-best model, SHViT, while running faster. Furthermore, when scaling image resolution or applying distillation training, EfficientViM shows significant improvements in throughput and accuracy over prior works. Code is available at https://github.com/mlvlab/EfficientViM.
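The sketch below illustrates the core HSM-SSD idea from the abstract: applying channel mixing to the N hidden states rather than the L tokens, with N ≪ L. It is a minimal simplification under stated assumptions, not the paper's implementation; the names (`HSMSSD`, `proj_bc`, `mixer`, `gate`), the softmax aggregation, and the single-head structure are illustrative choices. See the linked repository for the actual layer.

```python
import torch
import torch.nn as nn


class HSMSSD(nn.Module):
    """Illustrative hidden state mixer-based SSD layer (simplified).

    A plain SSD layer aggregates the L tokens into N hidden states
    (H = B^T X) and reads them back out (Y = C H), applying channel
    mixing over all L tokens. Moving the channel mixing inside the
    hidden states makes its cost O(N * D^2) instead of O(L * D^2),
    with N << L. This is a hypothetical parameterization, not the
    paper's exact one.
    """

    def __init__(self, dim: int, n_states: int = 16):
        super().__init__()
        self.n_states = n_states
        # Per-token SSD parameters: input matrix B and output matrix C.
        self.proj_bc = nn.Linear(dim, 2 * n_states)
        # Channel mixing applied to the N hidden states (the "HSM").
        self.mixer = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, L, dim) flattened image tokens.
        b, c = self.proj_bc(x).split(self.n_states, dim=-1)  # (batch, L, N)
        # Aggregate tokens into N hidden states: H = softmax(B)^T X.
        h = torch.einsum("bln,bld->bnd", b.softmax(dim=1), x)
        # Gated channel mixing over N state vectors, not L token vectors.
        h = self.mixer(h) * torch.sigmoid(self.gate(h))
        # Broadcast mixed states back to the tokens: Y = C H.
        return torch.einsum("bln,bnd->bld", c, h)


layer = HSMSSD(dim=128, n_states=16)
tokens = torch.randn(2, 196, 128)   # e.g., a 14x14 feature map
out = layer(tokens)                 # (2, 196, 128)
```

With 196 tokens and 16 hidden states, the mixing projections here touch 16 vectors instead of 196, which is where the claimed cost reduction comes from; the multi-stage hidden state fusion and the memory-bound-operation redesign mentioned in the abstract are omitted from this sketch.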
Community
The following similar papers were recommended by the Semantic Scholar API:
- MobileMamba: Lightweight Multi-Receptive Visual Mamba Network (2024)
- TinyViM: Frequency Decoupling for Tiny Hybrid Vision Mamba (2024)
- Spatial-Mamba: Effective Visual State Space Models via Structure-Aware State Fusion (2024)
- QuadMamba: Learning Quadtree-based Selective Scan for Visual State Space Model (2024)
- HRVMamba: High-Resolution Visual State Space Model for Dense Prediction (2024)
- AsCAN: Asymmetric Convolution-Attention Networks for Efficient Recognition and Generation (2024)
- V2M: Visual 2-Dimensional Mamba for Image Representation Learning (2024)