Update README

Files changed:
- README.md (+9 −7)
- modeling_aimv2.py (+1 −1)
README.md
CHANGED
@@ -11,7 +11,7 @@ tags:
   - pytorch
 ---
 # Introduction
-[[`AIMv2 Paper`](
+[[`AIMv2 Paper`](https://arxiv.org/abs/2411.14402)] [[`BibTeX`](#citation)]
 
 We introduce the AIMv2 family of vision models pre-trained with a multimodal autoregressive objective.
 AIMv2 pre-training is simple and straightforward to train and scale effectively. Some AIMv2 highlights include:
@@ -69,12 +69,14 @@ outputs = model(**inputs)
 ## Citation
 If you find our work useful, please consider citing us as:
 ```bibtex
-@misc{
-
-
-
-
-
+@misc{fini2024multimodalautoregressivepretraininglarge,
+      author = {Fini, Enrico and Shukor, Mustafa and Li, Xiujun and Dufter, Philipp and Klein, Michal and Haldimann, David and Aitharaju, Sai and da Costa, Victor Guilherme Turrisi and Béthune, Louis and Gan, Zhe and Toshev, Alexander T and Eichner, Marcin and Nabi, Moin and Yang, Yinfei and Susskind, Joshua M. and El-Nouby, Alaaeldin},
+      url = {https://arxiv.org/abs/2411.14402},
+      eprint = {2411.14402},
+      eprintclass = {cs.CV},
+      eprinttype = {arXiv},
+      title = {Multimodal Autoregressive Pre-training of Large Vision Encoders},
+      year = {2024},
 }
 ```
 
modeling_aimv2.py
CHANGED
@@ -102,7 +102,7 @@ class AIMv2ViTPreprocessor(nn.Module):
         pos_embed = get_sincos_pos_embed(
             H // self.patch_h, W // self.patch_w, embed_dim=self.embed_dim
         )
-        tokens = tokens + pos_embed
+        tokens = tokens + pos_embed
         return tokens
 
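The modeling_aimv2.py hunk above adds a 2D sin-cos positional embedding to the patch tokens. As a point of reference, here is a minimal NumPy sketch of the standard ViT-style sin-cos construction (half the channels encode the row index, half the column index); the actual `get_sincos_pos_embed` in `modeling_aimv2.py` may differ in layout or channel ordering:

```python
import numpy as np

def sincos_1d(positions: np.ndarray, dim: int) -> np.ndarray:
    """1D sin-cos embedding: first dim/2 channels are sine, rest cosine."""
    omega = 1.0 / (10000 ** (np.arange(dim // 2) / (dim // 2)))
    angles = np.outer(positions, omega)                # (N, dim/2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)  # (N, dim)

def get_sincos_pos_embed(h: int, w: int, embed_dim: int) -> np.ndarray:
    """2D grid embedding: half of embed_dim encodes rows, half columns.

    Sketch of the conventional construction; the repo's helper may differ.
    """
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    emb_y = sincos_1d(ys.reshape(-1), embed_dim // 2)  # (h*w, embed_dim/2)
    emb_x = sincos_1d(xs.reshape(-1), embed_dim // 2)  # (h*w, embed_dim/2)
    return np.concatenate([emb_y, emb_x], axis=1)      # (h*w, embed_dim)

# e.g. a 224x224 input with 14x14 patches gives a 16x16 grid of tokens
pos_embed = get_sincos_pos_embed(16, 16, embed_dim=256)
print(pos_embed.shape)  # (256, 256)
```

Because the embedding depends only on the grid position, it can be recomputed for any input resolution, which is why the preprocessor derives the grid size from `H // self.patch_h` and `W // self.patch_w` at call time.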