---
pipeline_tag: image-text-to-text
---

This model is the stage 2 checkpoint for the CLIP@336 setting, one of the thirteen vision representation settings evaluated in the Law of Vision Representation in MLLMs.
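Below is a minimal loading sketch, assuming the checkpoint follows the standard LLaVA layout and is used with a LLaVA-style codebase such as the one accompanying the paper; the local path `./llava_clip_stage2` is a placeholder.

```python
# Minimal sketch: loading the stage 2 checkpoint with a LLaVA-style codebase.
# Assumes the checkpoint directory has been downloaded locally and that the
# codebase exposes the same loader as upstream LLaVA; the path below is a
# placeholder, not a confirmed location.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "./llava_clip_stage2"  # local checkpoint directory (placeholder)
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
```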