FanLu31 committed on
Commit 586cb08
1 Parent(s): 08dc63f

Update README.md

Files changed (1)
  1. README.md +8 -2
README.md CHANGED
@@ -12,8 +12,14 @@ language:
 ### Dataset Description
 The CompreCap benchmark is characterized by human-annotated scene graphs and focuses on the evaluation of comprehensive image captioning.
 It provides new semantic segmentation annotations for common objects in images, with an average mask coverage of 95.83%.
- Beyond the careful annotation of objects, CompreCap also includes high-quality descriptions of the attributes bound to the objects, as well as directional relational descriptions between the objects, composing a complete and directed scene graph structure.
- Based on the CompreCap benchmark, researchers can comprehensively assess the quality of image captions generated by large vision-language models.
+ Beyond the careful annotation of objects, CompreCap also includes high-quality descriptions of the attributes bound to the objects, as well as directional relation descriptions between the objects, composing a complete and directed scene graph structure:
+ <div align="center">
+ <p>
+ <img src="graph_anno.png" alt="CompreCap" width="500" height="auto">
+ </p>
+ </div>
+
+ The annotations of segmentation masks, category names, and the descriptions of attributes and relationships are saved in [./anno.json](https://huggingface.co/datasets/FanLu31/CompreCap/blob/main/anno.json). Based on the CompreCap benchmark, researchers can comprehensively assess the quality of image captions generated by large vision-language models.

 ### Licensing Information
 We distribute the images under a standard Creative Commons CC-BY-4.0 license. The individual images are under their own copyrights.
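
For readers who want to inspect the annotation file added in this update, here is a minimal Python sketch (not part of the commit). It assumes anno.json has been downloaded locally from the dataset repository and makes no assumption about its schema beyond it being valid JSON; it simply loads the file and reports its top-level structure.

```python
import json

# Minimal sketch: load the anno.json referenced in the README
# (assumed to be downloaded locally from the FanLu31/CompreCap repo).
# The schema is not documented in this commit, so we only report the
# top-level structure rather than assume any field names.
with open("anno.json", "r", encoding="utf-8") as f:
    anno = json.load(f)

if isinstance(anno, dict):
    # A dict: peek at the first few keys (e.g. image ids or annotation groups).
    print("top-level keys:", list(anno.keys())[:5])
elif isinstance(anno, list):
    # A list of records: peek at the first entry, if any.
    print("first record:", anno[0] if anno else "(empty)")
```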