Additionally, our findings indicate that while LVLMs demonstrate a strong capability to perceive image details, enhancing their ability to associate information across multiple images hinges on improving the reasoning capabilities of their language model component.

Moreover, we explored the ability of LVLMs to perceive image sequences within the context of our multi-image association task. Our experiments indicate that the majority of current LVLMs do not adequately model image sequences during the pre-training process.

![framework](./imgs/framework.png)

![main_result](./imgs/main_result.png)

---
# Evaluation Codes
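Benchmarks of this kind are typically scored by comparing each model's predicted option letter against the gold answer. The snippet below is a minimal, illustrative sketch of that scoring step; the `accuracy` helper and the option-letter format are assumptions for illustration, not the repository's actual evaluation API.

```python
def accuracy(predictions, answers):
    """Fraction of items whose predicted option letter matches the gold answer.

    Comparison is case-insensitive and ignores surrounding whitespace.
    """
    if not answers:
        return 0.0
    correct = sum(
        1 for pred, gold in zip(predictions, answers)
        if pred.strip().upper() == gold.strip().upper()
    )
    return correct / len(answers)


if __name__ == "__main__":
    # Toy example: 3 of 4 predictions match the gold answers.
    preds = ["A", "C", "B", "D"]
    golds = ["A", "B", "B", "D"]
    print(f"accuracy = {accuracy(preds, golds):.2f}")
```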