---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: conversations
      list:
        - name: role
          dtype: string
        - name: content
          dtype: string
  splits:
    - name: train
      num_bytes: 277884785
      num_examples: 160000
  download_size: 126665150
  dataset_size: 277884785
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Text-Based Reasoning About Vector Graphics

🌐 Homepage | 📃 Paper | 🤗 Data (PVD-160k) | 🤗 Model (PVD-160k-Mistral-7b) | 💻 Code

We observe that current large multimodal models (LMMs) still struggle with seemingly straightforward reasoning tasks that require precise perception of low-level visual details, such as identifying spatial relations or solving simple mazes. In particular, this failure mode persists in question-answering tasks about vector graphics—images composed purely of 2D objects and shapes.

*Teaser figure*

To address this challenge, we propose the Visually Descriptive Language Model (VDLM), a text-based visual reasoning framework for vector graphics. VDLM operates on text-based visual descriptions, specifically SVG representations and learned Primal Visual Descriptions (PVD), enabling zero-shot reasoning with an off-the-shelf LLM. We demonstrate that VDLM outperforms state-of-the-art large multimodal models, such as GPT-4V, across various multimodal reasoning tasks involving vector graphics. See our paper (coming soon) for more details.
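As a reading aid, here is a minimal sketch of the perceive-then-reason flow the paragraph above describes: image to SVG, SVG to PVD, then PVD plus question to an LLM. Every helper name and stub body below is a hypothetical placeholder, not VDLM's actual code or API.

```python
# Hypothetical sketch of the VDLM flow (image -> SVG -> PVD -> LLM).
# All helpers are placeholder stubs, NOT the real VDLM implementation.

def image_to_svg(image_path: str) -> str:
    # Stand-in for the raster-to-SVG conversion step.
    return '<svg><circle cx="10" cy="10" r="5"/></svg>'

def svg_to_pvd(svg: str) -> str:
    # Stand-in for the learned SVG-to-PVD model (e.g., PVD-160k-Mistral-7b).
    return '{"shape": "circle", "center": [10, 10], "radius": 5}'

def vdlm_answer(image_path: str, question: str,
                llm=lambda prompt: "(LLM answer)") -> str:
    pvd = svg_to_pvd(image_to_svg(image_path))
    prompt = f"Perceived objects:\n{pvd}\n\nQuestion: {question}"
    return llm(prompt)  # zero-shot reasoning with an off-the-shelf LLM

print(vdlm_answer("example.png", "Is the circle left of the square?"))
```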

*Overview figure*
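Each example follows the schema declared in the metadata above: an `id` string plus a list of `conversations` turns with `role` and `content` fields. A minimal loading sketch with the Hugging Face `datasets` library follows; the repo id `mikewang/PVD-160K` is an assumption inferred from this page.

```python
# Minimal sketch: load PVD-160k with the Hugging Face `datasets` library.
# Assumption: the Hub repo id is "mikewang/PVD-160K" (inferred from this page).
from datasets import load_dataset

ds = load_dataset("mikewang/PVD-160K", split="train")
print(ds.num_rows)  # expected 160000, per the metadata above

example = ds[0]
print(example["id"])
for turn in example["conversations"]:
    print(turn["role"], "->", turn["content"][:80])
```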