roman-bachmann committed
Commit f023fc4
Parent(s): ddddd76
Init

Browse files:
- LICENSE +10 -0
- README.md +64 -0
- config.json +57 -0
- model.safetensors +3 -0
LICENSE ADDED
@@ -0,0 +1,10 @@
+Sample Code License
+Version: 1.1
+
+IMPORTANT: This software is supplied to you by École Polytechnique Fédérale de Lausanne (“EPFL”) and Apple Inc. ("Apple") in consideration of your agreement to the following terms, and your use, installation, modification or redistribution of this software constitutes acceptance of these terms. If you do not agree with these terms, please do not use, install, modify or redistribute this software.
+
+In consideration of your agreement to abide by the following terms, and subject to these terms, EPFL and Apple (collectively, “Licensor”) grant you a personal, non-exclusive license, under Licensor’s copyrights in this original software (the "Software"), to use, reproduce, modify and redistribute the Software, with or without modifications, in source and/or binary forms for non-commercial use; provided that if you redistribute the Software in its entirety and without modifications, you must retain this notice and the following text and disclaimers in all such redistributions of the Software. Neither the name, trademarks, service marks or logos of Licensor may be used to endorse or promote products derived from the Software without specific prior written permission from Licensor. Except as expressly stated in this notice, no other rights or licenses, express or implied, are granted by Licensor herein, including but not limited to any patent rights that may be infringed by your derivative works or by other works in which the Software may be incorporated.
+
+The Software is provided by Licensor on an "AS IS" basis. LICENSOR MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, REGARDING THE SOFTWARE OR ITS USE AND OPERATION ALONE OR IN COMBINATION WITH YOUR PRODUCTS. IN NO EVENT SHALL LICENSOR BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION, MODIFICATION AND/OR DISTRIBUTION OF THE SOFTWARE, HOWEVER CAUSED AND WHETHER UNDER THEORY OF CONTRACT, TORT (INCLUDING NEGLIGENCE), STRICT LIABILITY OR OTHERWISE, EVEN IF LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Copyright (C) 2024. All Rights Reserved.
README.md ADDED
@@ -0,0 +1,64 @@
+---
+license: other
+license_name: sample-code-license
+license_link: LICENSE
+library_name: ml-4m
+---
+
+# 4M: Massively Multimodal Masked Modeling
+
+*A framework for training any-to-any multimodal foundation models. <br>Scalable. Open-sourced. Across tens of modalities and tasks.*
+
+[`Website`](https://4m.epfl.ch) | [`GitHub`](https://github.com/apple/ml-4m) | [`BibTeX`](#citation)
+
+Official implementation and pre-trained models for:
+
+[**4M: Massively Multimodal Masked Modeling**](https://arxiv.org/abs/2312.06647), NeurIPS 2023 (Spotlight) <br>
+*[David Mizrahi](https://dmizrahi.com/)\*, [Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/), [Teresa Yeo](https://aserety.github.io/), [Mingfei Gao](https://fly6464.github.io/), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)*
+
+**4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities**, arXiv 2024 <br>
+*[Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/)\*, [David Mizrahi](https://dmizrahi.com/)\*, [Ali Garjani](https://garjania.github.io/), [Mingfei Gao](https://fly6464.github.io/), [David Griffiths](https://www.dgriffiths.uk/), [Jiaming Hu](https://scholar.google.com/citations?user=vm3imKsAAAAJ&hl=en), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)*
+
+4M is a framework for training "any-to-any" foundation models, using tokenization and masking to scale to many diverse modalities.
+Models trained using 4M can perform a wide range of vision tasks, transfer well to unseen tasks and modalities, and are flexible and steerable multimodal generative models.
+We are releasing code and models for "4M: Massively Multimodal Masked Modeling" (here denoted 4M-7), as well as "4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities" (here denoted 4M-21).
+
+
+## Installation
+For install instructions, please see https://github.com/apple/ml-4m.
+
+
+## Usage
+
+This model can be loaded from Hugging Face Hub as follows:
+```python
+from fourm.models.fm import FM
+fm = FM.from_pretrained('EPFL-VILAB/4M-21_L')
+```
+
+Please see [README_GENERATION.md](https://github.com/apple/ml-4m/blob/main/README_GENERATION.md) for more detailed instructions and https://github.com/apple/ml-4m for other 4M model and tokenizer checkpoints.
+
+Safetensors checkpoints are hosted under https://huggingface.co/EPFL-VILAB/4M.
+
+## Citation
+
+If you find this repository helpful, please consider citing our work:
+```
+@inproceedings{4m,
+    title={{4M}: Massively Multimodal Masked Modeling},
+    author={David Mizrahi and Roman Bachmann and O{\u{g}}uzhan Fatih Kar and Teresa Yeo and Mingfei Gao and Afshin Dehghan and Amir Zamir},
+    booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
+    year={2023},
+}
+
+@article{4m21,
+    title={{4M-21}: An Any-to-Any Vision Model for Tens of Tasks and Modalities},
+    author={Roman Bachmann and O{\u{g}}uzhan Fatih Kar and David Mizrahi and Ali Garjani and Mingfei Gao and David Griffiths and Jiaming Hu and Afshin Dehghan and Amir Zamir},
+    journal={arXiv 2024},
+    year={2024},
+}
+```
+
+## License
+
+The model weights in this repository are released under the Sample Code license as found in the [LICENSE](LICENSE) file.
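As a companion to the README's usage snippet, here is a minimal, hypothetical sketch (not the official ml-4m API) of what `FM.from_pretrained` resolves to at the file level: it fetches the two files this commit adds and inspects them directly, assuming `huggingface_hub`, `safetensors`, and `torch` are installed. The supported loading path remains `FM.from_pretrained` as shown in the README.

```python
# Sketch only: download the files added in this commit and peek inside them.
# The repo id is taken from the README's usage example.
import json

from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

config_path = hf_hub_download("EPFL-VILAB/4M-21_L", "config.json")
weights_path = hf_hub_download("EPFL-VILAB/4M-21_L", "model.safetensors")

with open(config_path) as f:
    config = json.load(f)
print("input modalities:", config["domains_in"])  # see config.json below

state_dict = load_file(weights_path)  # dict: tensor name -> torch.Tensor
print(f"{len(state_dict)} tensors on disk")
```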
config.json ADDED
@@ -0,0 +1,57 @@
+{
+    "act_layer": "SiLU",
+    "decoder_depth": 24,
+    "dim": 1024,
+    "domains_in": [
+        "caption",
+        "t5_caption",
+        "det",
+        "metadata",
+        "human_poses",
+        "color_palette",
+        "sam_instance",
+        "rgb@224",
+        "tok_rgb@224",
+        "tok_normal@224",
+        "tok_depth@224",
+        "tok_semseg@224",
+        "tok_clip@224",
+        "tok_dinov2@224",
+        "tok_dinov2_global",
+        "tok_imagebind@224",
+        "tok_imagebind_global",
+        "tok_sam_edge@224",
+        "tok_canny_edge@224"
+    ],
+    "domains_out": [
+        "caption",
+        "t5_caption",
+        "det",
+        "metadata",
+        "human_poses",
+        "color_palette",
+        "sam_instance",
+        "tok_rgb@224",
+        "tok_normal@224",
+        "tok_depth@224",
+        "tok_semseg@224",
+        "tok_clip@224",
+        "tok_dinov2@224",
+        "tok_dinov2_global",
+        "tok_imagebind@224",
+        "tok_imagebind_global",
+        "tok_sam_edge@224",
+        "tok_canny_edge@224"
+    ],
+    "encoder_depth": 24,
+    "gated_mlp": true,
+    "image_size": 224,
+    "mlp_bias": false,
+    "mlp_ratio": 4,
+    "norm_bias": false,
+    "num_heads": 16,
+    "patch_size": 16,
+    "proj_bias": false,
+    "qkv_bias": false,
+    "share_modality_embeddings": false
+}
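Read by name, the config describes a 24-layer encoder, 24-layer decoder transformer of width 1024 over the listed input and output modalities. The short sketch below derives a few quantities from it; interpreting the fields this way is an assumption based on their names, not something the commit itself states.

```python
# Sketch: derive a few quantities from the config above. Field meanings are
# inferred from their names (an assumption), not from the ml-4m source.
import json

with open("config.json") as f:
    cfg = json.load(f)

# 224x224 images cut into 16x16 patches -> a 14x14 grid of 196 patch tokens
grid = cfg["image_size"] // cfg["patch_size"]
print(f"patch grid: {grid}x{grid} = {grid * grid} tokens")

# width 1024 split over 16 attention heads -> 64 dims per head
print("head dim:", cfg["dim"] // cfg["num_heads"])

# depth of the encoder and decoder stacks
print(cfg["encoder_depth"], "encoder /", cfg["decoder_depth"], "decoder blocks")

# rgb@224 appears only as an input; every other domain is both input and output
print(len(cfg["domains_in"]), "input domains,", len(cfg["domains_out"]), "output domains")
```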
model.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f6a0d9f552522b4a96cb2037c1ceeb59c8bb657d39dd8bb1bf5b607e74cef4c7
+size 6256957760
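The `model.safetensors` entry is a Git LFS pointer: the repository tracks only the payload's sha256 digest (`oid`) and byte size, while the roughly 6.3 GB of weights live in LFS storage. A small sketch, assuming the real file has already been downloaded to the working directory, that checks a local copy against this pointer:

```python
# Sketch: verify a downloaded model.safetensors against the LFS pointer above.
# EXPECTED_OID and EXPECTED_SIZE are copied verbatim from the pointer file.
import hashlib

EXPECTED_OID = "f6a0d9f552522b4a96cb2037c1ceeb59c8bb657d39dd8bb1bf5b607e74cef4c7"
EXPECTED_SIZE = 6256957760

sha = hashlib.sha256()
size = 0
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
        sha.update(chunk)
        size += len(chunk)

assert size == EXPECTED_SIZE, f"size mismatch: {size} != {EXPECTED_SIZE}"
assert sha.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("OK: local file matches the LFS pointer")
```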