gghfez/Llama-3.3-90B-Vision-merged

Since Meta has finished with Llama 3 and likely won't release a version 3.3 of its vision model, I've swapped all the text layers of Llama-3.2-90B-Vision-Instruct (which are identical to those of Llama-3.1-70B-Instruct) for the corresponding layers from Llama-3.3-70B-Instruct, so we get the benefits of Llama-3.3-70B-Instruct when doing vision tasks.
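The swap boils down to copying matching text-side tensors from a donor checkpoint into the vision checkpoint while leaving the vision tower untouched. Here's a minimal sketch of that merge logic on plain state dicts; the `language_model.`/`vision_model.` key prefixes are illustrative assumptions, not the exact names in the real Llama-3.2-Vision checkpoints (which also interleave extra cross-attention layers, so a real merge needs to map layer indices carefully).

```python
def merge_text_layers(vision_sd: dict, donor_sd: dict,
                      text_prefix: str = "language_model.") -> dict:
    """Return a new state dict where every text-side tensor in the vision
    checkpoint is replaced by the matching tensor from the donor checkpoint.

    Keys not under `text_prefix` (e.g. the vision tower) are kept as-is.
    The donor is assumed to be a text-only model, so its keys lack the prefix.
    """
    merged = dict(vision_sd)  # start from the vision checkpoint
    for key in vision_sd:
        if key.startswith(text_prefix):
            donor_key = key[len(text_prefix):]  # strip prefix to match donor naming
            if donor_key in donor_sd:
                merged[key] = donor_sd[donor_key]
    return merged

# Tiny dummy example (floats stand in for weight tensors):
vision = {
    "vision_model.patch_embed.weight": 1.0,   # vision tower: must survive
    "language_model.layers.0.mlp.weight": 2.0,  # text layer: gets replaced
}
donor = {"layers.0.mlp.weight": 9.0}
merged = merge_text_layers(vision, donor)
```

After the merge, the text layer carries the donor's weight (9.0) while the vision-tower weight is unchanged (1.0).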

I've switched to this model and it's working as expected. If anyone has a comprehensive vision benchmark, let me know; I'd be curious to see whether there's a measurable performance improvement.

Format: Safetensors · Model size: 88.6B params · Tensor type: BF16
