---
pretty_name: SPEC
task_categories:
- image-to-text
- text-to-image
- image-classification
tags:
- image
- text
language:
- en
license: apache-2.0
size_categories:
- 1K<n<10K
---

# [CVPR 2024] SPEC Benchmark: Evaluating VLMs in Fine-grained and Compositional Understanding

Introduced in the CVPR 2024 paper [Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding](https://huggingface.co/papers/2312.00081).

[**Code**](https://github.com/wjpoom/SPEC) | [**🤗 Paper**](https://huggingface.co/papers/2312.00081) | [**📖 arXiv**](https://arxiv.org/abs/2312.00081)

To evaluate how well vision-language models understand fine-grained concepts, we propose a new benchmark, SPEC, which consists of six distinct subsets distributed across the dimensions of **S**ize, **P**osition, **E**xistence, and **C**ount. Each test case consists of an image candidate set, whose members differ only in a certain visual concept, and a text candidate set, whose members differ only in the corresponding language concept.
<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/649bce4f200e2dff194d9883/sE65-zVjY_HXUT4-eaqZ9.png" width="90%"/>
  <br>
</p>

## 🔧 Usage
### Install
```shell
git clone https://github.com/wjpoom/SPEC.git
cd SPEC/
pip install -e .
```
### Prepare data
* Run the following code in a Python shell, replacing `/path/to/save/data` with the directory where you want to store the data.
```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# Download the archive from the Hugging Face Hub into `data_root`.
data_root = '/path/to/save/data'
hf_hub_download(repo_id='wjpoom/SPEC', repo_type='dataset', filename='data.zip', local_dir=data_root)

# Unpack the archive, then delete it to save space.
with zipfile.ZipFile(os.path.join(data_root, 'data.zip'), 'r') as zip_ref:
    zip_ref.extractall(data_root)

os.remove(os.path.join(data_root, 'data.zip'))
```
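The snippet above is a one-shot download-extract-cleanup. If you want to sanity-check the extract-and-cleanup pattern without downloading anything, the same steps can be exercised on a throwaway archive built locally (everything below is illustrative, not part of the SPEC data):

```python
import os
import tempfile
import zipfile

# Stand-in for the downloaded archive: build a tiny zip in a temp dir.
data_root = tempfile.mkdtemp()
archive = os.path.join(data_root, 'data.zip')
with zipfile.ZipFile(archive, 'w') as zf:
    zf.writestr('data/sample.txt', 'hello SPEC')

# Extract next to the archive, then remove the archive itself.
with zipfile.ZipFile(archive, 'r') as zip_ref:
    zip_ref.extractall(data_root)
os.remove(archive)

print(sorted(os.listdir(data_root)))  # only the extracted 'data' dir remains
```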
### Explore the dataset
* We provide a 📓 notebook that lets you visually explore the test samples in the SPEC dataset.
* Run it either [locally](https://github.com/wjpoom/SPEC/blob/main/notebooks/explore_spec_local.ipynb) or online via [Colab](https://colab.research.google.com/github/wjpoom/SPEC/blob/main/notebooks/explore_spec_colab.ipynb).

### Reproduce the results
* In our paper, we evaluated four popular VLMs on SPEC: CLIP, BLIP, FLAVA, and CoCa.
* To reproduce the results for these VLMs, run [this script](https://github.com/wjpoom/SPEC/blob/main/spec/run_eval.sh).
* You can also reproduce them with this [local notebook](https://github.com/wjpoom/SPEC/blob/main/notebooks/evaluate_example_local.ipynb) or the online [Colab notebook](https://colab.research.google.com/github/wjpoom/SPEC/blob/main/notebooks/evaluate_example_colab.ipynb).
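Because each test case pairs an image candidate set with a text candidate set, evaluation boils down to matching every image to the right text among the candidates, and vice versa. As a rough illustration only (the official protocol is the repo's evaluation code, not this sketch), matching accuracy over a similarity matrix could be computed like so:

```python
import numpy as np

def candidate_matching_accuracy(sim: np.ndarray) -> tuple[float, float]:
    """Toy image-to-text / text-to-image accuracy for one candidate set.

    sim[i, j] is a model's similarity between image i and text j; the
    ground-truth pairing is assumed to be the diagonal (image i <-> text i).
    """
    i2t = float(np.mean(sim.argmax(axis=1) == np.arange(sim.shape[0])))
    t2i = float(np.mean(sim.argmax(axis=0) == np.arange(sim.shape[1])))
    return i2t, t2i

# Illustrative 3x3 similarity matrix: image 2 wrongly prefers text 0,
# so image-to-text accuracy is 2/3 while text-to-image is perfect.
sim = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.8, 0.1],
                [0.7, 0.1, 0.3]])
print(candidate_matching_accuracy(sim))  # i2t = 2/3, t2i = 1.0
```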

### Evaluate custom VLMs
* To evaluate your own model on SPEC, follow the instructions in [this document](https://github.com/wjpoom/SPEC/blob/main/docs/evaluate_custom_model.md).
+
65
+ * ## ✒️ Citation
66
+ If you use our code or data in this repo or find our work helpful, please consider giving a citation:
67
+
68
+ ```
69
+ @inproceedings{spec2024,
70
+ title={Synthesize Diagnose and Optimize: Towards Fine-Grained Vision-Language Understanding},
71
+ author={Peng, Wujian and Xie, Sicheng and You, Zuyao and Lan, Shiyi and Wu, Zuxuan},
72
+ booktitle={CVPR},
73
+ year={2024}
74
+ }
75
+ ```