---
license: apache-2.0
---
|
|
|
# Circular-based Relation Probing Evaluation (CRPE) |
|
|
|
CRPE is a benchmark designed to quantitatively evaluate the object recognition and relation comprehension abilities of models.
|
The evaluation is formulated as single-choice questions. |
|
|
|
The benchmark consists of four splits: |
|
**Existence**, **Subject**, **Predicate**, and **Object**. |
|
|
|
The **Existence** split evaluates object recognition, while the remaining three splits evaluate relation comprehension, each probing one element of the relation triplet `(subject, predicate, object)` separately.
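
For intuition, here is a hypothetical **Subject**-split query for the triplet `(man, riding, horse)`; this record is our own illustrative sketch, not drawn from the dataset, and the field names are assumptions:

```python
# Hypothetical single-choice query probing the *subject* slot of the
# triplet (man, riding, horse). Field names are illustrative only;
# see the actual JSONL files for the real schema.
query = {
    "image": "example.jpg",
    "question": "What is riding the horse in the image?",
    "choices": ["A man", "A dog", "A bicycle", "A cat"],
    "answer": 0,  # index of the correct choice
}
```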
|
Some data examples are shown below. |
|
|
|
<img width="800" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/_NKaowl2OUBAjck1XCAPm.jpeg"> |
|
|
|
Additionally, to evaluate the dependency of models on language priors, we also include abnormal data in our evaluation.

The images in these abnormal samples depict relation triplets that are very rare in the real world.
|
|
|
<img width="800" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/qKWw7Qb93OXClxI_VrCRk.jpeg"> |
|
|
|
For a robust evaluation, we adopt CircularEval as our evaluation strategy. |
|
Under this setting, a question is considered correctly answered only when the model consistently predicts the correct answer in each of the N iterations, where N is the number of choices.
|
In each iteration, a circular shift is applied to both the choices and the answer to form a new query for the model. |
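
To make the strategy concrete, here is a minimal sketch of CircularEval in Python; the `model` callable and its return convention are placeholders we assume for illustration, not the official implementation (linked below):

```python
def circular_eval(model, question, choices, answer_idx):
    """Return True only if `model` picks the correct choice under every
    circular shift of the choice list (N shifts for N choices).

    `model` is a placeholder callable taking (question, choices) and
    returning the index of its predicted choice.
    """
    n = len(choices)
    for shift in range(n):
        # Rotate the choices; the correct answer's index moves with them.
        shifted_choices = choices[shift:] + choices[:shift]
        shifted_answer = (answer_idx - shift) % n
        if model(question, shifted_choices) != shifted_answer:
            return False  # a single wrong prediction fails the question
    return True
```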
|
|
|
CRPE contains the following files: |
|
- `crpe_exist.jsonl`: the evaluation data of the **Existence** split.

- `crpe_exist_meta.jsonl`: the evaluation data of the **Existence** split without CircularEval.

- `crpe_relation.jsonl`: the evaluation data of the **Subject**, **Predicate**, and **Object** splits.

- `crpe_relation_meta.jsonl`: the evaluation data of the **Subject**, **Predicate**, and **Object** splits without CircularEval.
|
|
|
**NOTE**: You should use `crpe_exist.jsonl` and `crpe_relation.jsonl` for evaluation. The evaluation script is available [here](https://github.com/OpenGVLab/all-seeing/blob/main/all-seeing-v2/llava/eval/eval_crpe.py).
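
As a starting point, here is a minimal sketch for loading these files (JSONL, one record per line); the exact field names are not documented here, so inspect a record or the official evaluation script before relying on them:

```python
import json

def load_jsonl(path):
    """Read a JSONL file: one JSON object per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

exist_data = load_jsonl("crpe_exist.jsonl")
relation_data = load_jsonl("crpe_relation.jsonl")

print(len(exist_data), "existence records")
# Peek at one record to discover the actual field names.
print(json.dumps(exist_data[0], indent=2, ensure_ascii=False)[:500])
```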
|
|
|
See our [project](https://github.com/OpenGVLab/all-seeing/tree/main/all-seeing-v2) for more details!
|
|
|
# Citation |
|
|
|
If you find our work useful in your research, please consider citing:
|
|
|
```BibTeX
@article{wang2023allseeing,
  title={The All-Seeing Project: Towards Panoptic Visual Recognition and Understanding of the Open World},
  author={Wang, Weiyun and Shi, Min and Li, Qingyun and Wang, Wenhai and Huang, Zhenhang and Xing, Linjie and Chen, Zhe and Li, Hao and Zhu, Xizhou and Cao, Zhiguo and others},
  journal={arXiv preprint arXiv:2308.01907},
  year={2023}
}

@article{wang2024allseeing_v2,
  title={The All-Seeing Project V2: Towards General Relation Comprehension of the Open World},
  author={Wang, Weiyun and Ren, Yiming and Luo, Haowen and Li, Tiantong and Yan, Chenxiang and Chen, Zhe and Wang, Wenhai and Li, Qingyun and Lu, Lewei and Zhu, Xizhou and others},
  journal={arXiv preprint arXiv:2402.19474},
  year={2024}
}
```