# <img src="assets/icon.png" width="35" /> ReFocus
This repo contains the model for the paper "ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding".
[**🌐 Homepage**](https://zeyofu.github.io/ReFocus/) | [**📑 Paper**](https://arxiv.org/abs/2501.05452) | [**🔗 Code**](https://github.com/zeyofu/ReFocus_Code)
# Introduction
![ReFocus teaser figure](assets/teaser.png)
# ReFocus Finetuning
We follow the [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook/blob/main/md/04.Fine-tuning/FineTuning_Vision.md) for the supervised finetuning experiments.
## Inference with the Finetuned Model
We release our best finetuned ReFocus model, trained with full chain-of-thought data, on [HuggingFace](https://huggingface.co/Fiaa/ReFocus).
This model is finetuned from Phi-3.5-vision, and we used the following prompt during evaluation:
```
<|image|>\n{question}\nThought:
```
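
As a minimal sketch (the helper name `build_refocus_prompt` is our own, not from the paper), the evaluation template above can be filled in before handing the string to the Phi-3.5-vision processor:

```python
def build_refocus_prompt(question: str, with_focus_hint: bool = False) -> str:
    # Templates copied from this README. The focus-hint variant uses the
    # <|image_1|> placeholder and appends a suffix that nudges the model
    # to emit bounding box coordinates before answering.
    if with_focus_hint:
        return (
            f"<|image_1|>\n{question}\nThought: "
            "The areas to focus on in the image have bounding box coordinates:"
        )
    return f"<|image|>\n{question}\nThought:"
```
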
To encourage the model to generate bounding box coordinates for refocusing, you can try this prompt:
```
<|image_1|>\n{question}\nThought: The areas to focus on in the image have bounding box coordinates:
```
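
If the model then answers with bracketed coordinate lists such as `[x1, y1, x2, y2]` (an assumption on our part; check the actual outputs of your checkpoint and adapt the pattern), they can be extracted with a small regex sketch:

```python
import re

def extract_boxes(text: str) -> list[tuple[float, float, float, float]]:
    # Matches bracketed 4-tuples like "[12, 30, 200, 88]" or
    # "[0.1, 0.2, 0.5, 0.9]". The exact output format of the finetuned
    # model is an assumption here, not taken from the paper.
    pattern = r"\[\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*\]"
    return [
        tuple(float(g) for g in m.groups())
        for m in re.finditer(pattern, text)
    ]
```
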
---
license: apache-2.0
---