# Visual Haystacks Dataset Card

## Dataset details

1. Dataset type: Visual Haystacks (VHs) is a benchmark dataset specifically designed to evaluate the ability of Large Multimodal Models (LMMs) to handle long-context visual information. It can also be viewed as the first visual-centric Needle-In-A-Haystack (NIAH) benchmark dataset. Please also download COCO-2017's validation and test sets (see the preparation steps below).

2. Data Preparation and Benchmarking
   - Download the VQA questions (a Python alternative is sketched after this list):
     ```
     huggingface-cli download --repo-type dataset tsunghanwu/visual_haystacks --local-dir dataset/VHs_qa
     ```
   - Download the COCO 2017 dataset and organize it as follows, with the default root directory ./dataset/coco (a download-and-extract sketch also follows this list):
     ```
     dataset/
     ├── coco
     │   ├── annotations
     │   ├── test2017
     │   └── val2017
     └── VHs_qa
         ├── VHs_full
         │   ├── multi_needle
         │   └── single_needle
         └── VHs_small
             ├── multi_needle
             └── single_needle
     ```
   - Follow the instructions in https://github.com/visual-haystacks/vhs_benchmark to run the evaluation (a quick layout sanity check is sketched after this list).

3. Please check out our [project page](https://visual-haystacks.github.io) for more information. You can also send questions or comments about the dataset to [our GitHub repo](https://github.com/visual-haystacks/vhs_benchmark/issues).
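
If you prefer scripting the download, the same VQA files can be fetched from Python with the `huggingface_hub` library (a minimal sketch, equivalent to the `huggingface-cli` command above; assumes `pip install huggingface_hub`):

```python
# Minimal sketch: fetch the VHs QA files with the huggingface_hub API,
# mirroring the huggingface-cli command in the preparation steps above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="tsunghanwu/visual_haystacks",
    repo_type="dataset",
    local_dir="dataset/VHs_qa",
)
```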
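
For the COCO 2017 images and annotations, a sketch along these lines produces the layout shown above (the cocodataset.org URLs are the standard public download links, not part of this card; adjust if they have moved):

```python
# Sketch: download and unpack the COCO 2017 files into dataset/coco.
# The URLs are the standard cocodataset.org links (an assumption here).
import urllib.request
import zipfile
from pathlib import Path

COCO_ROOT = Path("dataset/coco")
COCO_ROOT.mkdir(parents=True, exist_ok=True)

urls = [
    "http://images.cocodataset.org/zips/val2017.zip",
    "http://images.cocodataset.org/zips/test2017.zip",
    "http://images.cocodataset.org/annotations/annotations_trainval2017.zip",
]

for url in urls:
    archive = COCO_ROOT / url.rsplit("/", 1)[-1]
    if not archive.exists():
        print(f"Downloading {url} ...")
        urllib.request.urlretrieve(url, archive)
    # Each zip unpacks into its own subdirectory: val2017/, test2017/, annotations/
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(COCO_ROOT)
```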
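
Before running the benchmark, a quick sanity check that everything landed where the evaluation code expects it (a sketch; the `*.json` glob assumes the QA annotations are JSON files, which this card does not specify):

```python
# Sketch: verify the expected directory layout and count the QA files.
# The .json extension is an assumption about the QA file format.
from pathlib import Path

expected = [
    "coco/annotations", "coco/test2017", "coco/val2017",
    "VHs_qa/VHs_full/multi_needle", "VHs_qa/VHs_full/single_needle",
    "VHs_qa/VHs_small/multi_needle", "VHs_qa/VHs_small/single_needle",
]

for sub in expected:
    path = Path("dataset") / sub
    print(f"{path}: {'ok' if path.is_dir() else 'MISSING'}")

n_qa = len(list(Path("dataset/VHs_qa").rglob("*.json")))
print(f"Found {n_qa} QA annotation files under dataset/VHs_qa")
```
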
## Intended use

Primary intended uses: research on large multimodal models and chatbots.