rwightman committed
Commit: b47a596
Parent: 38d2954

Update model config and README

Files changed (2)
  1. README.md +134 -2
  2. model.safetensors +3 -0
README.md CHANGED
@@ -2,6 +2,138 @@
  tags:
  - image-classification
  - timm
- library_tag: timm
+ library_name: timm
+ license: apache-2.0
+ datasets:
+ - imagenet-21k-p
  ---
- # Model card for mobilenetv3_large_100.miil_in21k
+ # Model card for mobilenetv3_large_100.miil_in21k
+
+ A MobileNet-v3 image classification model. Trained on ImageNet-21k-P by Alibaba MIIL.
+
+ ## Model Details
+ - **Model Type:** Image classification / feature backbone
+ - **Model Stats:**
+   - Params (M): 18.6
+   - GMACs: 0.2
+   - Activations (M): 4.4
+   - Image size: 224 x 224
+ - **Papers:**
+   - Searching for MobileNetV3: https://arxiv.org/abs/1905.02244
+ - **Dataset:** ImageNet-21k-P
+
+ ## Model Usage
+ ### Image Classification
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import timm
+ import torch
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model('mobilenetv3_large_100.miil_in21k', pretrained=True)
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
+
+ top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
+ ```
+
+ ### Feature Map Extraction
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import timm
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model(
+     'mobilenetv3_large_100.miil_in21k',
+     pretrained=True,
+     features_only=True,
+ )
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
+
+ for o in output:
+     # print shape of each feature map in output
+     # e.g.:
+     #  torch.Size([1, 16, 112, 112])
+     #  torch.Size([1, 24, 56, 56])
+     #  torch.Size([1, 40, 28, 28])
+     #  torch.Size([1, 112, 14, 14])
+     #  torch.Size([1, 960, 7, 7])
+     print(o.shape)
+ ```
+
+ ### Image Embeddings
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import timm
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model(
+     'mobilenetv3_large_100.miil_in21k',
+     pretrained=True,
+     num_classes=0,  # remove classifier nn.Linear
+ )
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
+
+ # or equivalently (without needing to set num_classes=0)
+ output = model.forward_features(transforms(img).unsqueeze(0))
+ # output is unpooled, a (1, 960, 7, 7) shaped tensor
+
+ output = model.forward_head(output, pre_logits=True)
+ # output is a (1, num_features) shaped tensor
+ ```
+
+ ## Model Comparison
+ Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
+
+ ## Citation
+ ```bibtex
+ @inproceedings{howard2019searching,
+   title={Searching for mobilenetv3},
+   author={Howard, Andrew and Sandler, Mark and Chu, Grace and Chen, Liang-Chieh and Chen, Bo and Tan, Mingxing and Wang, Weijun and Zhu, Yukun and Pang, Ruoming and Vasudevan, Vijay and others},
+   booktitle={Proceedings of the IEEE/CVF international conference on computer vision},
+   pages={1314--1324},
+   year={2019}
+ }
+ ```
+ ```bibtex
+ @misc{rw2019timm,
+   author = {Ross Wightman},
+   title = {PyTorch Image Models},
+   year = {2019},
+   publisher = {GitHub},
+   journal = {GitHub repository},
+   doi = {10.5281/zenodo.4414861},
+   howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
+ }
+ ```
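The parameter count quoted under Model Details in the README above (Params (M): 18.6) is easy to reproduce. A minimal editorial sketch, not part of this commit, assuming only that `timm` is installed:

```python
import timm

# Instantiate the architecture; pretrained weights are not needed just to count parameters.
model = timm.create_model('mobilenetv3_large_100.miil_in21k', pretrained=False)

num_params = sum(p.numel() for p in model.parameters())
print(f'{num_params / 1e6:.1f} M params')  # the model card above lists 18.6 M
```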
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6b2092b19ddae6268b28b5fa2e7e320e322bd207c628b2ab3c5d91b45ced53a
+ size 74430412
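The three added lines are a Git LFS pointer rather than the weights themselves; the actual file is identified by the SHA-256 oid and size above. A minimal editorial sketch of how one might verify a locally downloaded copy against the pointer (the local path `model.safetensors` is an assumption):

```python
import hashlib
import os

path = 'model.safetensors'  # assumed local path to the downloaded weight file

# Expected values copied from the LFS pointer above.
expected_oid = 'a6b2092b19ddae6268b28b5fa2e7e320e322bd207c628b2ab3c5d91b45ced53a'
expected_size = 74430412

h = hashlib.sha256()
with open(path, 'rb') as f:
    for chunk in iter(lambda: f.read(1 << 20), b''):
        h.update(chunk)

print('size matches:  ', os.path.getsize(path) == expected_size)
print('sha256 matches:', h.hexdigest() == expected_oid)
```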