This is the benchmark dataset for "A Benchmark for Multi-modal Foundation Models on Low-level Vision: from Single Images to Pairs" (Q-Bench2).

The structure of the jsonl files is as follows (a short loading sketch follows the list):

  1. q-bench2-a1-dev.jsonl (with img_path, question, answer_candidates, and correct_answer)
  2. q-bench2-a1-test.jsonl (with img_path, question, and answer_candidates; correct_answer is withheld)
  3. q-bench2-a2.jsonl (with img_path and an empty response field)
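
For reference, here is a minimal Python sketch for reading the dev file with the built-in json module (the relative file path is an assumption; the field names are those listed above):

import json

# Read q-bench2-a1-dev.jsonl line by line; each line is one JSON record
# containing img_path, question, answer_candidates and correct_answer.
with open("q-bench2-a1-dev.jsonl", "r", encoding="utf-8") as f:
    dev_records = [json.loads(line) for line in f if line.strip()]

print(dev_records[0]["question"], dev_records[0]["answer_candidates"])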

The img_path is organized as prefix + "\\" + img1 + "_cat_" + img2 + ".jpg"

For example, if the img_path is "llvisionqa_compare_dev\\00079.jpg_cat_09769.jpg.jpg", then the prefix is "llvisionqa_compare_dev", img1 is "00079.jpg", and img2 is "09769.jpg".

You can use the following function to recover the two single-image paths from an img_path:

import os

def get_img_names(img_path, prefix="path_to_all_single_images"):
    # Strip the directory prefix and the extra trailing ".jpg", then split
    # on "_cat_" to recover the two original single-image file names.
    img_paths = img_path.split('\\')[1][:-4].split("_cat_")
    img1_name = os.path.join(prefix, img_paths[0])
    img2_name = os.path.join(prefix, img_paths[1])
    return img1_name, img2_name
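
For example, with the img_path from above and a hypothetical prefix pointing at the all_single_images folder:

img1, img2 = get_img_names("llvisionqa_compare_dev\\00079.jpg_cat_09769.jpg.jpg",
                           prefix="all_single_images")
# On a POSIX system: img1 == "all_single_images/00079.jpg"
#                    img2 == "all_single_images/09769.jpg"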

The image file structure is as follows (see the sketch after the list):

  1. all_single_images: all of the single images used (available via the Baiduyunpan download link)
  2. llvisionqa_compare_dev: the concatenated images for the dev subset of the perception-compare task
  3. llvisionqa_compare_test: the concatenated images for the test subset of the perception-compare task
  4. lldescribe_compare: the concatenated images for the description-compare task
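
As an illustration, a minimal sketch for opening one concatenated pair image from the dev subset (Pillow and the relative path are assumptions):

from PIL import Image

# Open a concatenated pair image used by the perception-compare dev subset.
pair_img = Image.open("llvisionqa_compare_dev/00079.jpg_cat_09769.jpg.jpg")
print(pair_img.size)  # (width, height) of the concatenated pair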

Submission for testing your own MLLM on q-bench2

  1. Perception-compare task (a1): organize your jsonl file "q-bench2-a1-test_(YOUR_MLLM_NAME).jsonl" following the structure of the provided "q-bench2-a1-dev.jsonl" (see the sketch after this list)
  2. Description-compare task (a2): fill in the empty "response" fields of the "q-bench2-a2.jsonl" file and rename it to "q-bench2-a2_(YOUR_MLLM_NAME).jsonl"
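
As a rough sketch of the a1 submission (only the file names and overall structure come from the instructions above; the answer field name and the my_mllm_answer helper are placeholders for your own inference code):

import json

def my_mllm_answer(img_path, question, candidates):
    # Placeholder: replace this with your own MLLM inference.
    # It should return one of the provided answer_candidates.
    return candidates[0]

# Read the test questions, fill in your model's chosen answer so each record
# mirrors the structure of q-bench2-a1-dev.jsonl, then write the renamed file.
with open("q-bench2-a1-test.jsonl", "r", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

for rec in records:
    rec["correct_answer"] = my_mllm_answer(rec["img_path"], rec["question"],
                                           rec["answer_candidates"])

with open("q-bench2-a1-test_(YOUR_MLLM_NAME).jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")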

Please send your submission files to either of the first authors to obtain the results of your MLLM.

Zicheng Zhang (zzc1998@sjtu.edu.cn) and Haoning Wu (haoning001@e.ntu.edu.sg)
