---
license: apache-2.0
---

CRPE is a benchmark designed to quantitatively evaluate the object recognition and relation comprehension abilities of models. The evaluation is formulated as single-choice questions.

The benchmark consists of four splits: **Existence**, **Subject**, **Predicate**, and **Object**. The **Existence** split evaluates object recognition, while the remaining splits evaluate relation comprehension, each probing one element of the subject-predicate-object triplets in the scene graph.

Some data examples are shown below.

![crpe.jpg](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/_NKaowl2OUBAjck1XCAPm.jpeg)

See our [paper](https://github.com/OpenGVLab/all-seeing/all-seeing-v2) for more details!
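
The snippet below is a minimal sketch of loading and inspecting the benchmark with the `datasets` library. The dataset id `OpenGVLab/CRPE` and the assumption that the repository loads directly via `load_dataset` are illustrative; adjust them to match the files actually hosted here.

```python
from datasets import load_dataset

# Sketch only: the dataset id below is an assumption -- replace it with the
# id of this repository if it differs, or point load_dataset at the
# downloaded annotation files instead.
dataset = load_dataset("OpenGVLab/CRPE")

# List the available splits (Existence, Subject, Predicate, Object) and
# print one single-choice question from each; the exact field names depend
# on the annotation format and are not fixed here.
for split_name, split in dataset.items():
    print(split_name, split[0])
```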