---
pretty_name: SPEC
task_categories:
- image-to-text
- text-to-image
- image-classification
tags:
- image
- text
language:
- en
license: apache-2.0
size_categories:
- 1K<n<10K
---
# [CVPR 2024] SPEC Benchmark: Evaluating VLMs in Fine-grained and Compositional Understanding

SPEC was introduced in the CVPR 2024 paper *Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding*.
To evaluate the fine-grained understanding capability of vision-language models (VLMs), we propose a new benchmark, SPEC, which consists of six distinct subsets distributed across four dimensions: Size, Position, Existence, and Count. Each test case consists of an image candidate set, whose members differ only in a certain visual concept, and a text candidate set, whose members differ only in the corresponding language concept.
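As an illustrative sketch (not the paper's exact evaluation protocol), a candidate-set test case can be scored by checking that each image is most similar to its paired text (image-to-text) and that each text is most similar to its paired image (text-to-image). The diagonal pairing and similarity values below are made up:

```python
import numpy as np

def score_test_case(sim):
    """Score one SPEC-style test case from a similarity matrix.

    sim[i, j] is the model's similarity between image candidate i and
    text candidate j; the correct pairing is assumed to lie on the
    diagonal. Returns (image-to-text correct, text-to-image correct).
    """
    i2t = np.all(np.argmax(sim, axis=1) == np.arange(sim.shape[0]))
    t2i = np.all(np.argmax(sim, axis=0) == np.arange(sim.shape[1]))
    return bool(i2t), bool(t2i)

# Toy similarity matrix where the diagonal pairing wins in both directions:
sim = np.array([[0.9, 0.1],
                [0.2, 0.8]])
print(score_test_case(sim))  # (True, True)
```

Because the candidates within a set differ only in one concept, a model that ignores that concept scores near chance on the corresponding subset.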
## 🔧 Usage
### Install

```shell
git clone https://github.com/wjpoom/SPEC.git
cd SPEC/
pip install -e .
```
### Prepare data

Run the following code in a Python shell, replacing `/path/to/save/data` with the directory where you want to store the data.
```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# Download the dataset archive from the Hugging Face Hub.
data_root = '/path/to/save/data'
hf_hub_download(repo_id='wjpoom/SPEC', repo_type='dataset', filename='data.zip', local_dir=data_root)

# Unpack the archive, then remove it.
with zipfile.ZipFile(os.path.join(data_root, 'data.zip'), 'r') as zip_ref:
    zip_ref.extractall(data_root)
os.remove(os.path.join(data_root, 'data.zip'))
```
### Explore the dataset

- We provide a 📓 notebook that enables you to visually explore the test samples in the SPEC dataset.
- Run this notebook either locally or online using Colab.
### Reproduce the results

- In our paper, we evaluated four popular VLMs on SPEC: CLIP, BLIP, FLAVA, and CoCa.
- To reproduce the results with these VLMs, you can run this script.
- You can also reproduce them with this local notebook or the online Colab notebook.
### Evaluate custom VLMs

If you want to evaluate your custom model on SPEC, you can follow the instructions in this document.
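The linked document specifies the actual interface SPEC expects. As a hypothetical sketch only, evaluation code for a dual-encoder VLM typically needs an image encoder, a text encoder, and a similarity function; the class name, method names, and embedding size below are illustrative, with random stub embeddings standing in for a real model:

```python
import numpy as np

class CustomVLMWrapper:
    """Illustrative wrapper around a custom dual-encoder VLM."""

    def __init__(self, model=None, dim=512):
        self.model = model  # your underlying VLM would go here
        self.dim = dim

    def encode_images(self, images):
        # Replace with your model's image encoder; this stub returns
        # unit-norm random embeddings of the expected shape.
        feats = np.random.randn(len(images), self.dim)
        return feats / np.linalg.norm(feats, axis=1, keepdims=True)

    def encode_texts(self, texts):
        # Replace with your model's text encoder (stub, as above).
        feats = np.random.randn(len(texts), self.dim)
        return feats / np.linalg.norm(feats, axis=1, keepdims=True)

    def similarity(self, images, texts):
        # Cosine similarity between every image and text candidate,
        # shape (len(images), len(texts)).
        return self.encode_images(images) @ self.encode_texts(texts).T
```

With such a wrapper, scoring a test case reduces to computing the similarity matrix between its image and text candidate sets.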
## ✒️ Citation

If you use our code or data, or find our work helpful, please consider citing:
```bibtex
@inproceedings{spec2024,
  title={Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding},
  author={Peng, Wujian and Xie, Sicheng and You, Zuyao and Lan, Shiyi and Wu, Zuxuan},
  booktitle={CVPR},
  year={2024}
}
```