---
license: apache-2.0
---
# GLaMM-GranD-Pretrained
## Description
GLaMM-GranD-Pretrained is the GLaMM model pretrained on the GranD dataset, a large-scale dataset generated with an automated annotation pipeline for detailed region-level understanding and segmentation masks. GranD comprises 7.5M unique concepts anchored in a total of 810M regions, each with a segmentation mask.
## Download

To get started with GLaMM-GranD-Pretrained, clone the repository with Git LFS:

```bash
git lfs install
git clone https://huggingface.co/MBZUAI/GLaMM-GranD-Pretrained
```
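If you prefer to fetch the weights from Python rather than git, the `huggingface_hub` client can download the same repository. A minimal sketch, assuming `huggingface_hub` is installed; the `local_dir` argument is optional and only chosen here for illustration:

```python
from huggingface_hub import snapshot_download

# Download all files of the pretrained checkpoint from the Hugging Face Hub.
# Omit local_dir to use the default Hugging Face cache directory instead.
model_path = snapshot_download(
    repo_id="MBZUAI/GLaMM-GranD-Pretrained",
    local_dir="GLaMM-GranD-Pretrained",
)
print(f"Checkpoint downloaded to: {model_path}")
```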
## Additional Resources

- **Paper:** [arXiv:2311.03356](https://arxiv.org/abs/2311.03356)
- **GitHub Repository:** GLaMM on GitHub, for code and updates.
- **Project Page:** For a detailed overview of and insights into the project, visit the GLaMM project page.
## Citations and Acknowledgments

```bibtex
@article{hanoona2023GLaMM,
  title={GLaMM: Pixel Grounding Large Multimodal Model},
  author={Rasheed, Hanoona and Maaz, Muhammad and Shaji, Sahal and Shaker, Abdelrahman and Khan, Salman and Cholakkal, Hisham and Anwer, Rao M. and Xing, Eric and Yang, Ming-Hsuan and Khan, Fahad S.},
  journal={ArXiv 2311.03356},
  year={2023}
}
```