# Visual Haystacks Dataset Card

## Dataset details
1. Dataset type: Visual Haystacks (VHs) is a benchmark dataset specifically designed to evaluate the capability of Large Multimodal Models (LMMs) to handle long-context visual information. It can also be viewed as the first visual-centric Needle-In-A-Haystack (NIAH) benchmark dataset. Please also download COCO-2017's training and validation sets.

2. Data Preparation and Benchmarking
   - Download the VQA questions (or fetch them programmatically; see the Python sketch after this list):
     ```
     huggingface-cli download --repo-type dataset tsunghanwu/visual_haystacks --local-dir dataset/VHs_qa
     ```
   - Download the COCO 2017 dataset and organize it as follows, with the default root directory ./dataset/coco (a quick layout check is sketched after this list):
     ```
     dataset/
     ├── coco
     │   ├── annotations
     │   ├── test2017
     │   └── val2017
     └── VHs_qa
         ├── VHs_full
         │   ├── multi_needle
         │   └── single_needle
         └── VHs_small
             ├── multi_needle
             └── single_needle
     ```
   - Follow the instructions in https://github.com/visual-haystacks/vhs_benchmark to run the evaluation.
3. Please check out our [project page](https://visual-haystacks.github.io) for more information. You can also send questions or comments about the dataset to [our GitHub repo](https://github.com/visual-haystacks/vhs_benchmark/issues).
 
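As an alternative to the CLI command in the first bullet, the question files can also be fetched from Python with the `huggingface_hub` library. This is a minimal sketch, assuming the package is installed (`pip install huggingface_hub`) and the default `dataset/VHs_qa` target directory is kept:

```python
# Programmatic alternative to the huggingface-cli command above.
from huggingface_hub import snapshot_download

# Fetch the VHs question files into the same default location
# that the CLI command uses (dataset/VHs_qa).
snapshot_download(
    repo_id="tsunghanwu/visual_haystacks",
    repo_type="dataset",
    local_dir="dataset/VHs_qa",
)
```
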
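Before running the evaluation, it may help to confirm that the layout from the second bullet is in place. The following is a small, hypothetical sanity check (not part of the official toolkit); the paths simply mirror the directory tree shown above:

```python
# Sanity-check the expected dataset layout before benchmarking.
from pathlib import Path

ROOT = Path("dataset")
EXPECTED = [
    "coco/annotations",
    "coco/test2017",
    "coco/val2017",
    "VHs_qa/VHs_full/multi_needle",
    "VHs_qa/VHs_full/single_needle",
    "VHs_qa/VHs_small/multi_needle",
    "VHs_qa/VHs_small/single_needle",
]

# Report any directory from the tree above that is missing.
missing = [p for p in EXPECTED if not (ROOT / p).is_dir()]
if missing:
    print("Missing directories:", ", ".join(missing))
else:
    print("Dataset layout looks good.")
```
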
## Intended use
Primary intended uses: The primary use of VHs is research on large multimodal models and chatbots.