
Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models

Zhang Li*, Biao Yang*, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu†, Xiang Bai†
Huazhong University of Science and Technology, Kingsoft

Paper | Detailed Caption | Model Weight | Model Weight in wisemodel


Monkey introduces a training-efficient approach that raises the supported input resolution to 896 x 1344 pixels without pretraining from scratch. To bridge the gap between simple text labels and high input resolution, we propose a multi-level description generation method that automatically provides rich information to guide the model in learning the contextual associations between scenes and objects. With the synergy of these two designs, our model achieves excellent results on multiple benchmarks. Compared with various LMMs, including GPT4V, Monkey demonstrates promising performance in image captioning by attending to textual information and capturing fine details within images; its higher input resolution also enables remarkable performance on document images with dense text.

Spotlights

  • Contextual associations. Our method demonstrates a superior ability to infer the relationships between targets when answering questions, yielding more comprehensive and insightful answers.
  • Supports resolutions up to 1344 x 896. Surpassing the 448 x 448 resolution typically employed for LMMs, this significant increase in resolution sharpens the ability to discern inconspicuous or tightly clustered objects and dense text.
  • Enhanced general performance. We carried out testing across 16 diverse datasets, and Monkey achieves impressive performance in tasks such as Image Captioning, General Visual Question Answering, Text-centric Visual Question Answering, and Document-oriented Visual Question Answering.

Environment

conda create -n monkey python=3.9
conda activate monkey
git clone https://github.com/Yuliang-Liu/Monkey.git
cd ./Monkey
pip install -r requirements.txt

Demo

Before 14/11/2023, we observed that for some random pictures Monkey can achieve more accurate results than GPT4V.

We also provide the source code and the model weight for the original demo, allowing you to customize certain parameters for a more unique experience. The specific operations are as follows:

  1. Make sure you have configured the environment.
  2. You can choose to use the demo offline or online:
  • Offline:
    • Download the Model Weight.
    • Modify DEFAULT_CKPT_PATH="pathto/Monkey" in the demo.py file to your model weight path.
    • Run the demo using the following command:
    python demo.py
    
  • Online:
    • Run the demo and download model weights online with the following command:
    python demo.py -c echo840/Monkey 
    

To generate more detailed captions, we provide some prompt examples so that you can conduct more interesting explorations. You can modify these two variables in the caption function to supply different prompt inputs for the caption task, as shown below:

query = "Generate the detailed caption in English: "
chat_query = "Generate the detailed caption in English: "
  • Generate the detailed caption in English.
  • Explain the visual content of the image in great detail.
  • Analyze the image in a comprehensive and detailed manner.
  • Describe the image in as much detail as possible in English without duplicating it.
  • Describe the image in as much detail as possible in English, including as many elements from the image as possible, but without repetition.
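If you want to run captioning outside the demo script, the checkpoint can also be queried directly through transformers. The snippet below is a minimal sketch rather than the demo's exact code: it assumes the <img>path</img> prompt convention and the eod_id tokenizer attribute inherited from the Qwen-VL codebase, and the generation arguments are illustrative defaults.

from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "echo840/Monkey"
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, device_map="cuda", trust_remote_code=True
).eval()
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
tokenizer.padding_side = "left"
tokenizer.pad_token_id = tokenizer.eod_id  # eod_id comes from the Qwen tokenizer Monkey inherits

img_path = "pathto/image.jpg"  # replace with your image
query = f"<img>{img_path}</img> Generate the detailed caption in English: "

inputs = tokenizer(query, return_tensors="pt", padding="longest")
pred = model.generate(
    input_ids=inputs.input_ids.cuda(),
    attention_mask=inputs.attention_mask.cuda(),
    do_sample=False,
    max_new_tokens=512,
    pad_token_id=tokenizer.eod_id,
    eos_token_id=tokenizer.eod_id,
)
# strip the prompt tokens and decode only the generated continuation
response = tokenizer.decode(pred[0][inputs.input_ids.size(1):], skip_special_tokens=True).strip()
print(response)

Swapping the query string for any of the prompts listed above changes the level of detail in the generated caption.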

Dataset

We have open-sourced the data generated by the multi-level description generation method. You can download it at Detailed Caption.

Evaluate

We offer evaluation code for 14 Visual Question Answering (VQA) datasets in the evaluate_vqa.py file, facilitating a quick verification of results. The specific operations are as follows:

  1. Make sure you have configured the environment.
  2. Modify sys.path.append("pathto/Monkey") to point to your model weight path.
  3. Prepare the datasets required for evaluation.
  4. Run the evaluation code.

Take ESTVQA as an example:

  • Prepare data according to the following directory structure:
├── data
|   ├── estvqa
|       ├── test_image
|           ├── {image_path0}
|           ├── {image_path1}
|                 ·
|                 ·
|   ├── estvqa.jsonl
  • Example of the format of each line of the annotated .jsonl file:
{"image": "data/estvqa/test_image/011364.jpg", "question": "What is this store?", "answer": "pizzeria", "question_id": 0}
  • Modify the dictionary ds_collections:
ds_collections = {
    'estvqa_test': {
        'test': 'data/estvqa/estvqa.jsonl',
        'metric': 'anls',
        'max_new_tokens': 100,
    },
    ...
}
  • Run the following command:
bash eval/eval.sh 'EVAL_PTH' 'SAVE_NAME'
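For context, the 'anls' value in ds_collections above selects Average Normalized Levenshtein Similarity, the standard metric for text-centric VQA benchmarks such as ESTVQA. The helper below is a hypothetical reference implementation for illustration, not the repository's evaluation code:

def levenshtein(a: str, b: str) -> int:
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def anls(prediction: str, gold_answers: list, tau: float = 0.5) -> float:
    # per-sample score: best normalized similarity over the gold answers,
    # zeroed out when the normalized distance reaches the threshold tau
    best = 0.0
    for gold in gold_answers:
        p, g = prediction.strip().lower(), gold.strip().lower()
        nl = levenshtein(p, g) / max(len(p), len(g), 1)
        best = max(best, 1.0 - nl if nl < tau else 0.0)
    return best

The dataset-level score is then the mean of anls over all questions.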

Train

We also offer Monkey's model definition and training code, which you can explore in the files above. You can run the training code by executing finetune_ds_debug.sh.

ATTENTION: Specify the path to your training data, which should be a json file consisting of a list of conversations.
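Since Monkey builds on the Qwen-VL codebase, the training file plausibly follows Qwen-VL's conversation schema. The entry below is an illustrative guess at that format (the field names and the <img> tag are assumptions); verify it against the data-loading code before training:

[
  {
    "id": "identity_0",
    "conversations": [
      {
        "from": "user",
        "value": "<img>pathto/image.jpg</img> Generate the detailed caption in English: "
      },
      {
        "from": "assistant",
        "value": "A red double-decker bus is parked on a rain-soaked street beside a row of shops ..."
      }
    ]
  }
]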

Citing Monkey

If you wish to refer to the baseline results published here, please use the following BibTeX entry:

@article{li2023monkey,
  title={Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models},
  author={Li, Zhang and Yang, Biao and Liu, Qiang and Ma, Zhiyin and Zhang, Shuo and Yang, Jingxu and Sun, Yabo and Liu, Yuliang and Bai, Xiang},
  journal={arXiv preprint arXiv:2311.06607},
  year={2023}
}

If you find Monkey cute, please give us a star. It would be a great encouragement for us.

Acknowledgement

Qwen-VL: the codebase we built upon. Thanks to the authors of Qwen-VL for providing the framework.

Copyright

We welcome suggestions to help us improve Monkey. For any query, please contact Dr. Yuliang Liu: ylliu@hust.edu.cn. If you find something interesting, please feel free to share it with us through email or by opening an issue. Thanks!