EPFL and Apple just released 4M-21: a single any-to-any model that can do anything from text-to-image generation to generating depth masks! 🙀 Let's unpack 🧶  

![image_1](image_1.jpg)

4M is a multimodal training [framework](https://t.co/jztLublfSF) introduced by Apple and EPFL.  
The resulting model takes in images and text and outputs images and text 🤩  
[Models](https://t.co/1LC0rAohEl) | [Demo](https://t.co/Ra9qbKcWeY)  

![video_1](video_1.mp4)
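
If you want to poke at the checkpoints yourself, the weights are on the Hugging Face Hub and the loading code lives in the apple/ml-4m repository. Here is a minimal sketch, assuming the `FM.from_pretrained` loader and the `EPFL/4M-21_XL` checkpoint name shown in that repo's README; double-check the exact module paths and model IDs there.

```python
# Sketch: loading a 4M-21 checkpoint with the code from https://github.com/apple/ml-4m/
# (assumes the repo is installed; module paths / checkpoint IDs may differ, see its README)
import torch
from fourm.models.fm import FM  # 4M transformer wrapper from the ml-4m repo

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the 4M-21 XL model directly from the Hugging Face Hub
fm = FM.from_pretrained("EPFL/4M-21_XL").eval().to(device)

# Generation between modalities is driven by the sampling utilities in the repo;
# see its README and notebooks for the full generation API.
```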

This model consists of a transformer encoder and decoder, where the key to multimodality lies in the input and output data: input and output tokens are decoded to generate bounding boxes, image pixels, captions and more!  

![image_2](image_2.jpg)
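
To make that token flow a bit more concrete, here is a purely conceptual sketch in plain Python. It is not the actual 4M code or API: the tokenizers, transformer and detokenizers are hypothetical stand-ins for the idea that every modality becomes discrete tokens, one shared encoder-decoder predicts the target modality's tokens, and a per-modality decoder maps them back to raw outputs.

```python
# Conceptual sketch only — NOT the actual 4M code or API.
from typing import Dict, List


def any_to_any(inputs: Dict[str, object],        # e.g. {"rgb": image, "caption": "a cat"}
               targets: List[str],               # e.g. ["depth", "bounding_boxes"]
               tokenizers: Dict[str, object],    # hypothetical per-modality tokenizers
               transformer: object,              # hypothetical shared encoder-decoder
               detokenizers: Dict[str, object],  # hypothetical per-modality decoders
               ) -> Dict[str, object]:
    # 1) Tokenize every given modality into one discrete token sequence.
    input_tokens = []
    for name, value in inputs.items():
        input_tokens.extend(tokenizers[name].encode(value))

    # 2) For each requested modality, the shared transformer predicts its token
    #    sequence, and 3) a modality-specific detokenizer turns those tokens
    #    back into the raw output: image pixels, boxes, a caption string, ...
    outputs = {}
    for name in targets:
        target_tokens = transformer.generate(input_tokens, target=name)
        outputs[name] = detokenizers[name].decode(target_tokens)
    return outputs
```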

This model also learnt to generate Canny edge maps, SAM edges and other outputs for steerable text-to-image generation 🖼️  
The authors only added image-to-all capabilities to the demo, but you can try using the model for text-to-image generation as well ☺️  

![image_3](image_3.jpg)  
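
If "Canny map" is new to you: it is just an edge image. The snippet below is not 4M code; it uses OpenCV to compute one from a local photo (hypothetical path) purely to show the kind of conditioning signal that steered generation relies on. 4M-21 can predict such maps itself instead of you computing them.

```python
# Not 4M code: compute a Canny edge map with OpenCV to visualize the kind of
# control signal used for steerable text-to-image generation.
import cv2

image = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)  # any local image path
edges = cv2.Canny(image, 100, 200)                     # binary edge map
cv2.imwrite("canny_map.png", edges)
```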

On the project page you can also see the model's text-to-image and steered generation capabilities, with the model's own outputs used as control masks! 

![video_2](video_2.mp4)


> [!TIP]
> Resources:  
> [4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities](https://arxiv.org/abs/2406.09406)  
> by Roman Bachmann, Oğuzhan Fatih Kar, David Mizrahi, Ali Garjani, Mingfei Gao, David Griffiths, Jiaming Hu, Afshin Dehghan, Amir Zamir (2024)  
> [GitHub](https://github.com/apple/ml-4m/)

> [!NOTE]
> [Original tweet](https://twitter.com/mervenoyann/status/1804138208814309626) (June 21, 2024)