---
library_name: transformers
pipeline_tag: image-text-to-text
---

# Ferret-UI-Gemma2b

Ferret-UI is the first UI-centric multimodal large language model (MLLM) designed for referring, grounding, and reasoning tasks. Built on Gemma-2B and Llama-3-8B, it is capable of executing complex UI tasks. This repository contains the Gemma-2B version of Ferret-UI, which follows the Ferret-UI paper by Apple.

## How to Use 🤗📱

You will first need to download `builder.py`, `conversation.py`, and `inference.py` locally:

```bash
wget https://huggingface.co/jadechoghari/Ferret-UI-Gemma2b/raw/main/conversation.py
wget https://huggingface.co/jadechoghari/Ferret-UI-Gemma2b/raw/main/builder.py
wget https://huggingface.co/jadechoghari/Ferret-UI-Gemma2b/raw/main/inference.py
```
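Alternatively, a minimal sketch of fetching the same three files with the `huggingface_hub` Python client (assuming the files sit at the root of this repository):

```python
# Sketch: download the helper scripts with huggingface_hub instead of wget.
# Assumes conversation.py, builder.py, and inference.py live at the repo root.
from huggingface_hub import hf_hub_download

for filename in ["conversation.py", "builder.py", "inference.py"]:
    local_path = hf_hub_download(
        repo_id="jadechoghari/Ferret-UI-Gemma2b",
        filename=filename,
        local_dir=".",  # place next to your script so `from inference import ...` works
    )
    print("Downloaded:", local_path)
```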

Usage:

```python
from inference import infer_ui_task

# Pass an image and the online model path
image_path = 'image.jpg'
model_path = 'jadechoghari/Ferret-UI-Gemma2b'
```

### Task not requiring bounding box

Choose a task from `['widget_listing', 'find_text', 'find_icons', 'find_widget', 'conversation_interaction']`:

```python
task = 'conversation_interaction'
result = infer_ui_task(image_path, "How do I navigate to the Games tab?", model_path, task)
print("Result:", result)
```

### Task requiring bounding box

Choose a task from `['widgetcaptions', 'taperception', 'ocr', 'icon_recognition', 'widget_classification', 'example_0']`:

```python
task = 'widgetcaptions'
region = (50, 50, 200, 200)
result = infer_ui_task(image_path, "Describe the contents of the box.", model_path, task, region=region)
print("Result:", result)
```

### Task with no image processing

Choose a task from `['screen2words', 'detailed_description', 'conversation_perception', 'gpt4']`:

```python
task = 'detailed_description'
result = infer_ui_task(image_path, "Please describe the screen in detail.", model_path, task)
print("Result:", result)
```