Modalities: Image, Text
Formats: parquet
Libraries: Datasets, Dask
SiweiWu committed on
Commit 58ccf65
1 Parent(s): 464c221

Update README.md

change images of readme

Files changed (1):
  1. README.md +3 -6
README.md CHANGED
@@ -15,13 +15,10 @@ Our experiments reveal that on the MMRA benchmark, current multi-image LVLMs exh
 Additionally, our findings indicate that while LVLMs demonstrate a strong capability to perceive image details, enhancing their ability to associate information across multiple images hinges on improving the reasoning capabilities of their language model component.
 Moreover, we explored the ability of LVLMs to perceive image sequences within the context of our multi-image association task. Our experiments indicate that the majority of current LVLMs do not adequately model image sequences during the pre-training process.
 
-<div align="center">
-<img src=./imgs/framework.png width=80% />
-</div>
+![framework](./imgs/framework.png)
+
+![main_result](./imgs/main_result.png)
 
-<div align="center">
-<img src=./imgs/main_result.png width=80% />
-</div>
 
 ---
 # Evaluateion Codes