---
library_name: transformers
pipeline_tag: image-text-to-text
---

Ferret-UI is the first UI-centric multimodal large language model (MLLM) designed for referring, grounding, and reasoning tasks. Built on Gemma-2B and Llama-3-8B, it is capable of executing complex UI tasks. This is the **Llama-3-8B** version of Ferret-UI. It is based on [this paper](https://arxiv.org/pdf/2404.05719) by Apple.

## How to Use 🤗📱

You will first need to download `builder.py`, `conversation.py`, and `inference.py` locally.

```bash
wget https://huggingface.co/jadechoghari/ferret-gemma/raw/main/conversation.py
wget https://huggingface.co/jadechoghari/ferret-gemma/raw/main/builder.py
wget https://huggingface.co/jadechoghari/ferret-gemma/raw/main/inference.py
```

### Usage

```python
from inference import infer_ui_task

# Pass an image and the online model path
image_path = 'image.jpg'
model_path = 'jadechoghari/Ferret-UI-Llama8b'
```

### Tasks requiring a bounding box

Choose a task from `['widgetcaptions', 'taperception', 'ocr', 'icon_recognition', 'widget_classification', 'example_0']`.

```python
task = 'widgetcaptions'
region = (50, 50, 200, 200)
result = infer_ui_task(image_path, "Describe the contents of the box.", model_path, task, region=region)
print("Result:", result)
```

### Tasks not requiring a bounding box

Choose a task from `['widget_listing', 'find_text', 'find_icons', 'find_widget', 'conversation_interaction']`.

```python
task = 'conversation_interaction'
result = infer_ui_task(image_path, "How do I navigate to the Games tab?", model_path, task)
print("Result:", result)
```

### Tasks with no image processing

Choose a task from `['screen2words', 'detailed_description', 'conversation_perception', 'gpt4']`.

```python
task = 'detailed_description'
result = infer_ui_task(image_path, "Please describe the screen in detail.", model_path, task)
print("Result:", result)
```
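### Putting it together

As a convenience, here is a minimal sketch that runs several task types against a single screenshot using only the `infer_ui_task` call shown above. It assumes the three helper scripts have been downloaded into the working directory, that `infer_ui_task` returns a printable result, and that the `region` tuple is `(x1, y1, x2, y2)` in image pixel coordinates; adjust the prompts and regions to your own screenshot.

```python
from inference import infer_ui_task

image_path = 'image.jpg'  # path to a local UI screenshot
model_path = 'jadechoghari/Ferret-UI-Llama8b'

# (task name, prompt, optional region) triples.
# Assumption: region is (x1, y1, x2, y2) in pixel coordinates.
tasks = [
    ('ocr', "Read the text in the box.", (50, 50, 200, 200)),
    ('find_text', "Find the 'Settings' label.", None),
    ('screen2words', "Summarize the screen.", None),
]

for task, prompt, region in tasks:
    if region is not None:
        # Grounded tasks take an explicit bounding box.
        result = infer_ui_task(image_path, prompt, model_path, task, region=region)
    else:
        # Other tasks operate on the whole screen.
        result = infer_ui_task(image_path, prompt, model_path, task)
    print(f"[{task}] {result}")
```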