---
license: apache-2.0
datasets:
- imagenet-1k
metrics:
- accuracy
tags:
- RyzenAI
- vision
- classification
- pytorch
- timm
---
# MNASNet_b1
Quantized MNASNet_b1 model that can be deployed with [AMD Ryzen AI](https://ryzenai.docs.amd.com/en/latest/).
## Model description
MNASNet was first introduced in the paper [MnasNet: Platform-Aware Neural Architecture Search for Mobile](https://arxiv.org/abs/1807.11626).
The model implementation is from [timm](https://huggingface.co/timm/mnasnet_100.rmsp_in1k).
## How to use
### Installation
Follow the [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) guide to prepare the environment for Ryzen AI.
Then run the following command to install the prerequisites for this model.
```bash
pip install -r requirements.txt
```
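As an optional sanity check (an assumed step, not part of the official instructions), you can confirm from Python that the installed ONNX Runtime build exposes the Vitis AI execution provider used by Ryzen AI:

```python
# Hypothetical sanity check: verify that onnxruntime is importable and list the
# execution providers this build was compiled with. A Ryzen AI environment is
# expected to include "VitisAIExecutionProvider" in the list.
import onnxruntime as ort

print(ort.__version__)
print(ort.get_available_providers())
```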
### Data Preparation
Follow the [ImageNet](https://huggingface.co/datasets/imagenet-1k) dataset card to prepare the dataset.
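The snippet below is a minimal sketch for checking the prepared data, assuming the evaluation script expects the common ImageFolder layout (`<data_dir>/val/<synset_id>/*.JPEG` with 1000 class folders); the path and file extension are placeholders, not requirements stated by this model card.

```python
# Assumed layout check: count class folders and validation images under <data_dir>/val.
from pathlib import Path

data_dir = Path("/Path/To/Your/Dataset")  # same placeholder path as in the evaluation command
val_dir = data_dir / "val"

class_dirs = [d for d in val_dir.iterdir() if d.is_dir()]
num_images = sum(1 for d in class_dirs for _ in d.glob("*.JPEG"))
print(f"{len(class_dirs)} class folders, {num_images} validation images")
```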
### Model Evaluation
```bash
python eval_onnx.py --onnx_model mnasnet_b1_int.onnx --ipu --provider_config Path\To\vaip_config.json --data_dir /Path/To/Your/Dataset
```
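For reference, the sketch below shows how a single image could be run through the quantized model with ONNX Runtime. It is an illustration only, not the official `eval_onnx.py`; the provider option name (`config_file`) and the preprocessing (224x224 resize with standard ImageNet mean/std) are assumptions and should be checked against the Ryzen AI documentation and the timm model config.

```python
# Illustrative single-image inference with the quantized ONNX model.
# Falls back to the CPU provider when the Vitis AI execution provider is unavailable.
import numpy as np
import onnxruntime as ort
from PIL import Image

providers = ["CPUExecutionProvider"]
provider_options = [{}]
if "VitisAIExecutionProvider" in ort.get_available_providers():
    providers = ["VitisAIExecutionProvider"]
    provider_options = [{"config_file": "Path/To/vaip_config.json"}]  # assumed option name

session = ort.InferenceSession(
    "mnasnet_b1_int.onnx",
    providers=providers,
    provider_options=provider_options,
)

# Simplified preprocessing (assumed): resize to 224x224, normalize with ImageNet mean/std, NCHW.
img = Image.open("example.jpg").convert("RGB").resize((224, 224))
x = np.asarray(img, dtype=np.float32) / 255.0
x = (x - np.array([0.485, 0.456, 0.406], dtype=np.float32)) / np.array([0.229, 0.224, 0.225], dtype=np.float32)
x = x.transpose(2, 0, 1)[None]

input_name = session.get_inputs()[0].name
logits = session.run(None, {input_name: x})[0]
print("Predicted class index:", int(np.argmax(logits, axis=1)[0]))
```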
### Performance
|Metric |Accuracy on IPU|
| :----: | :----: |
|Top1/Top5| 73.51% / 91.56% |
## Citation
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{tan2019mnasnet,
  title={MnasNet: Platform-Aware Neural Architecture Search for Mobile},
  author={Tan, Mingxing and Chen, Bo and Pang, Ruoming and Vasudevan, Vijay and Sandler, Mark and Howard, Andrew and Le, Quoc V},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={2820--2828},
  year={2019}
}
```