---
license: apache-2.0
---

# Circular-based Relation Probing Evaluation (CRPE)

CRPE is a benchmark designed to quantitatively evaluate the object recognition and relation comprehension ability of models.
The evaluation is formulated as single-choice questions.

The benchmark consists of four splits:
**Existence**, **Subject**, **Predicate**, and **Object**.

The **Existence** split evaluates object recognition, while the remaining splits evaluate relation comprehension, each probing one element of the subject-predicate-object triplets in the scene graph.
Some data examples are shown below.

![crpe.jpg](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/_NKaowl2OUBAjck1XCAPm.jpeg)

For a robust evaluation, we adopt CircularEval as our evaluation strategy.
Under this setting, a question is considered correctly answered only when the model predicts the correct answer in each of the N iterations, where N is the number of choices.
In each iteration, a circular shift is applied to both the choices and the answer to form a new query for the model.
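The CircularEval procedure can be sketched as follows. This is an illustrative sketch, not the official evaluation code: the `model` callable and its signature are assumptions, standing in for any function that takes a question and a list of choices and returns the index of its predicted answer.

```python
def circular_eval(model, question, choices, answer_idx):
    """Return True only if the model answers correctly in every rotation.

    In iteration k, the choices are circularly shifted by k positions,
    so the index of the correct answer shifts accordingly. A single
    wrong prediction in any rotation marks the question as failed.

    Hypothetical interface: model(question, choices) -> predicted index.
    """
    n = len(choices)
    for k in range(n):
        shifted = choices[k:] + choices[:k]      # circular shift of choices
        shifted_answer = (answer_idx - k) % n    # answer index moves with them
        if model(question, shifted) != shifted_answer:
            return False
    return True
```

This rewards consistency: a model that guesses the right letter by positional bias will fail once the choices rotate, whereas a model that truly recognizes the correct option passes all N rotations.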

See our [paper](https://github.com/OpenGVLab/all-seeing/all-seeing-v2) for more details!