Prasanna Iyer

prasiyer

AI & ML interests

None yet

Recent Activity

replied to merve's post 7 months ago
Chameleon 🦎 by Meta is now available in Hugging Face transformers 😍 A vision language model that comes in 7B and 34B sizes 🤩 But what makes this model so special?

Demo: https://huggingface.co/spaces/merve/chameleon-7b
Models: https://huggingface.co/collections/facebook/chameleon-668da9663f80d483b4c61f58

Keep reading ⥥

Chameleon is a unique model: it attempts to scale early fusion 🤨 But what is early fusion? Modern vision language models use a vision encoder with a projection layer that maps image embeddings into the input space of a text decoder (LLM), so images become promptable to the LLM. Early fusion instead fuses all features together: an image tokenizer turns image patches into tokens, and all tokens (image and text) are projected into a shared space, which enables seamless generation.

The authors also introduced architectural improvements (QK-norm and revised placement of layer norms) for scalable and stable training, and they were able to increase the token count (5x the tokens of Llama 3, which is a must with early fusion IMO).

Thanks to early fusion, this is an any-to-any model: it can take image and text as input and output image and text, though image generation is disabled to prevent malicious use. One can also do text-only prompting; the authors note the model catches up with larger LLMs (like Mixtral 8x7B or the larger Llama-2 70B), and in image-text prompting it is competitive with larger VLMs like IDEFICS2-80B (see the paper for the benchmarks: https://huggingface.co/papers/2405.09818).

Thanks for reading!
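Since the post says Chameleon is available in transformers, here is a minimal sketch of prompting it for image-text input. It assumes a transformers version with Chameleon support (v4.43+), a GPU with enough memory for the 7B checkpoint, and a placeholder image URL you would replace with your own:

```python
# Minimal sketch: prompting Chameleon (7B) via Hugging Face transformers.
# Assumes transformers >= 4.43; the image URL below is a placeholder.
import requests
import torch
from PIL import Image
from transformers import ChameleonForConditionalGeneration, ChameleonProcessor

processor = ChameleonProcessor.from_pretrained("facebook/chameleon-7b")
model = ChameleonForConditionalGeneration.from_pretrained(
    "facebook/chameleon-7b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The <image> token marks where the image tokens are fused into the prompt.
prompt = "What do you see in this image?<image>"
image = Image.open(requests.get("https://example.com/photo.jpg", stream=True).raw)

inputs = processor(text=prompt, images=image, return_tensors="pt").to(
    model.device, dtype=torch.bfloat16
)

# Text-only generation; image generation is disabled in the released weights.
output_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```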

Organizations

None yet

models

None public yet

datasets

None public yet