Commit b074695 by rwightman (HF staff)
1 Parent(s): 8bac47e

Update model config and README

Files changed (3)
  1. README.md +123 -1
  2. config.json +2 -1
  3. model.safetensors +3 -0
README.md CHANGED
@@ -3,5 +3,127 @@ tags:
  - image-classification
  - timm
  library_tag: timm
+ license: mit
+ datasets:
+ - imagenet-1k
+ - imagenet-22k
  ---
- # Model card for eva_large_patch14_336.in22k_ft_in1k
+ # Model card for eva_large_patch14_336.in22k_ft_in1k
+
+ An EVA image classification model. Pretrained on ImageNet-22k with masked image modeling (using EVA-CLIP as a MIM teacher) and fine-tuned on ImageNet-1k by the paper authors.
+
+ NOTE: `timm` checkpoints are float32 for consistency with other models. The original checkpoints are float16 or bfloat16 in some cases; see the originals if reduced precision is preferred.
+
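+ If reduced precision is preferred, the float32 weights can simply be cast after loading. A minimal sketch, assuming fp16 inference on a CUDA device (plain PyTorch casting, not a `timm`-specific API):
+
+ ```python
+ import timm
+ import torch
+
+ # load the float32 `timm` checkpoint, then cast weights to float16 for inference
+ model = timm.create_model('eva_large_patch14_336.in22k_ft_in1k', pretrained=True)
+ model = model.eval().half().cuda()  # assumes a CUDA device is available
+
+ # inputs must match the model's dtype and device
+ x = torch.randn(1, 3, 336, 336, dtype=torch.float16, device='cuda')
+ with torch.no_grad():
+     out = model(x)  # (1, 1000) float16 logits
+ ```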
+
+ ## Model Details
+ - **Model Type:** Image classification / feature backbone
+ - **Model Stats:**
+   - Params (M): 304.5
+   - GMACs: 191.1
+   - Activations (M): 270.2
+   - Image size: 336 x 336
+ - **Papers:**
+   - EVA: Exploring the Limits of Masked Visual Representation Learning at Scale: https://arxiv.org/abs/2211.07636
+ - **Pretrain Dataset:** ImageNet-22k
+ - **Dataset:** ImageNet-1k
+ - **Original:**
+   - https://github.com/baaivision/EVA
+   - https://huggingface.co/BAAI/EVA
+
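+ As a quick sanity check of the stats above, the parameter count can be recomputed locally. A sketch; `pretrained=False` builds the architecture without downloading weights:
+
+ ```python
+ import timm
+
+ model = timm.create_model('eva_large_patch14_336.in22k_ft_in1k', pretrained=False)
+ n_params = sum(p.numel() for p in model.parameters())
+ print(f'{n_params / 1e6:.1f} M')  # ~304.5 M, matching Params (M) above
+ ```
+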
+ ## Model Usage
+ ### Image Classification
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import timm
+ import torch
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model('eva_large_patch14_336.in22k_ft_in1k', pretrained=True)
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
+
+ top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
+ ```
+
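+ To turn the class indices into readable labels, one option is the plain-text ImageNet-1k class list from the `pytorch/hub` repository. A sketch continuing from the block above; the label file is an external convenience, not something this model card ships with:
+
+ ```python
+ from urllib.request import urlopen
+
+ # one class name per line, ordered by ImageNet-1k class index
+ IMAGENET_CLASSES_URL = 'https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt'
+ categories = urlopen(IMAGENET_CLASSES_URL).read().decode().splitlines()
+
+ for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
+     print(f'{categories[idx.item()]}: {prob.item():.2f}%')
+ ```
+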
+ ### Image Embeddings
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import timm
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model(
+     'eva_large_patch14_336.in22k_ft_in1k',
+     pretrained=True,
+     num_classes=0,  # remove classifier nn.Linear
+ )
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
+
+ # or equivalently (without needing to set num_classes=0)
+ output = model.forward_features(transforms(img).unsqueeze(0))
+ # output is unpooled, a (1, 577, 1024) shaped tensor
+
+ output = model.forward_head(output, pre_logits=True)
+ # output is a (1, num_features) shaped tensor
+ ```
+
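+ The pooled embedding can be compared across images, e.g. with cosine similarity. A sketch that continues from the block above; for illustration it embeds the same image twice, so the similarity is ~1.0:
+
+ ```python
+ import torch.nn.functional as F
+
+ # `output` is the (1, num_features) pre_logits tensor from the snippet above
+ other = model.forward_head(model.forward_features(transforms(img).unsqueeze(0)), pre_logits=True)
+ similarity = F.cosine_similarity(output, other)  # shape (1,), values in [-1, 1]
+ print(similarity.item())  # ~1.0 here, since both embeddings come from the same image
+ ```
+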
+ ## Model Comparison
+ Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
+
+ |model                                           |top1  |top5  |param_count|img_size|
+ |------------------------------------------------|------|------|-----------|--------|
+ |eva02_large_patch14_448.mim_m38m_ft_in22k_in1k  |90.054|99.042|305.08     |448     |
+ |eva02_large_patch14_448.mim_in22k_ft_in22k_in1k |89.946|99.01 |305.08     |448     |
+ |eva_giant_patch14_560.m30m_ft_in22k_in1k        |89.792|98.992|1014.45    |560     |
+ |eva02_large_patch14_448.mim_in22k_ft_in1k       |89.626|98.954|305.08     |448     |
+ |eva02_large_patch14_448.mim_m38m_ft_in1k        |89.57 |98.918|305.08     |448     |
+ |eva_giant_patch14_336.m30m_ft_in22k_in1k        |89.56 |98.956|1013.01    |336     |
+ |eva_giant_patch14_336.clip_ft_in1k              |89.466|98.82 |1013.01    |336     |
+ |eva_large_patch14_336.in22k_ft_in22k_in1k       |89.214|98.854|304.53     |336     |
+ |eva_giant_patch14_224.clip_ft_in1k              |88.882|98.678|1012.56    |224     |
+ |eva02_base_patch14_448.mim_in22k_ft_in22k_in1k  |88.692|98.722|87.12      |448     |
+ |eva_large_patch14_336.in22k_ft_in1k             |88.652|98.722|304.53     |336     |
+ |eva_large_patch14_196.in22k_ft_in22k_in1k       |88.592|98.656|304.14     |196     |
+ |eva02_base_patch14_448.mim_in22k_ft_in1k        |88.23 |98.564|87.12      |448     |
+ |eva_large_patch14_196.in22k_ft_in1k             |87.934|98.504|304.14     |196     |
+ |eva02_small_patch14_336.mim_in22k_ft_in1k       |85.74 |97.614|22.13      |336     |
+ |eva02_tiny_patch14_336.mim_in22k_ft_in1k        |80.658|95.524|5.76       |336     |
+
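+ Note that the checkpoints in the table expect different input resolutions; `resolve_model_data_config` reports the correct size per model. A sketch using two names from the table, built with `pretrained=False` to avoid weight downloads:
+
+ ```python
+ import timm
+
+ for name in ['eva02_tiny_patch14_336.mim_in22k_ft_in1k', 'eva_large_patch14_336.in22k_ft_in1k']:
+     m = timm.create_model(name, pretrained=False)
+     cfg = timm.data.resolve_model_data_config(m)
+     print(name, cfg['input_size'])  # e.g. (3, 336, 336)
+ ```
+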
+ ## Citation
+ ```bibtex
+ @article{EVA,
+   title={EVA: Exploring the Limits of Masked Visual Representation Learning at Scale},
+   author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
+   journal={arXiv preprint arXiv:2211.07636},
+   year={2022}
+ }
+ ```
+ ```bibtex
+ @misc{rw2019timm,
+   author = {Ross Wightman},
+   title = {PyTorch Image Models},
+   year = {2019},
+   publisher = {GitHub},
+   journal = {GitHub repository},
+   doi = {10.5281/zenodo.4414861},
+   howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
+ }
+ ```
config.json CHANGED
@@ -28,6 +28,7 @@
  "num_classes": 1000,
  "pool_size": null,
  "first_conv": "patch_embed.proj",
- "classifier": "head"
+ "classifier": "head",
+ "license": "mit"
  }
  }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0409dd39f5e5b8f2b9c28524e070f056eaacf68e6092e7079b345035ed1e60e8
+ size 1218153886