Datasets · Modalities: Image · Size: < 1K · Libraries: Datasets
Weiyun1025 committed 6fc6c75 (parent: 82b4c72): Update README.md

Files changed (1): README.md (+7 −2)
@@ -13,10 +13,15 @@ The benchmark consists of four splits:
 The **Existence** split evaluates the object recognition ability, while the remaining splits evaluate relation comprehension, probing each element of the subject-predicate-object triplets of the scene graph separately.
 Some data examples are shown below.
 
-![crpe.jpg](https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/_NKaowl2OUBAjck1XCAPm.jpeg)
+<img width="800" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/_NKaowl2OUBAjck1XCAPm.jpeg">
+
+Additionally, to evaluate the dependency on language priors, we also include abnormal data in our evaluation.
+The images in these abnormal data depict relation triplets that are very rare in the real world.
+
+<img width="800" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/qKWw7Qb93OXClxI_VrCRk.jpeg">
 
 For a robust evaluation, we adopt CircularEval as our evaluation strategy.
 Under this setting, a question is considered correctly answered only when the model consistently predicts the correct answer in each of the N iterations, with N corresponding to the number of choices.
 In each iteration, a circular shift is applied to both the choices and the answer to form a new query for the model.
 
-See our [paper](https://github.com/OpenGVLab/all-seeing/all-seeing-v2) to learn more details!
+See our [project](https://github.com/OpenGVLab/all-seeing/all-seeing-v2) for more details!
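
The CircularEval procedure described in the README can be sketched as follows. This is a minimal illustration under stated assumptions, not the benchmark's actual evaluation code: `circular_eval` and `predict` are hypothetical names, and `predict` stands in for any model that returns the index of its chosen option.

```python
def circular_eval(question, choices, answer_idx, predict):
    """Return True only if `predict` picks the correct option under every
    circular shift of the choices (N iterations for N choices).

    Hypothetical sketch of CircularEval; `predict(question, choices)` is
    assumed to return the index of the model's chosen option.
    """
    n = len(choices)
    for shift in range(n):
        # Rotate the choice list by `shift` positions.
        shifted = choices[shift:] + choices[:shift]
        # The correct answer moves along with the rotation.
        shifted_answer = (answer_idx - shift) % n
        if predict(question, shifted) != shifted_answer:
            return False  # one wrong iteration fails the whole question
    return True
```

A model that always picks the first option, for example, fails under this strategy unless it is genuinely tracking the correct answer across all N shifted orderings.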