arXiv:2410.11871
TinyClick: Single-Turn Agent for Empowering GUI Automation
Published on Oct 9, 2024
Abstract
We present a single-turn agent for graphical user interface (GUI) interaction tasks, built on the vision-language model Florence-2-Base. The agent's primary task is to identify the screen coordinates of the UI element corresponding to the user's command. It demonstrates strong performance on ScreenSpot and OmniACT while maintaining a compact size of 0.27B parameters and minimal latency. The main improvements come from multi-task training and MLLM-based data augmentation: manually annotated corpora are scarce, and we show that MLLM-based augmentation can produce better results. On ScreenSpot and OmniACT, our model outperforms both GUI-specific models (e.g., SeeClick) and general-purpose MLLMs (e.g., GPT-4V).
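Since the agent maps one screenshot plus one natural-language command to screen coordinates in a single turn, inference can be sketched with the standard Hugging Face transformers API for Florence-2-style models. This is a minimal sketch, not the authors' released pipeline: the checkpoint id, the prompt format, and the `click(x, y)` output format are assumptions for illustration; a released TinyClick model card would define the exact interface.

```python
# Minimal single-turn GUI grounding sketch with a Florence-2-style model.
# Assumptions: MODEL_ID is a placeholder (the base model, not TinyClick itself),
# and the command is passed directly as the text prompt.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

MODEL_ID = "microsoft/Florence-2-base"  # placeholder; a TinyClick checkpoint would go here

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)

image = Image.open("screenshot.png").convert("RGB")
command = "click on the search button"

# Single turn: one screenshot + one command in, generated text out.
inputs = processor(text=command, images=image, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=64,
        do_sample=False,
    )

# The decoded string would carry the predicted action/coordinates,
# e.g. something like "click(0.52, 0.31)" (format assumed here).
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

"Single-turn" here means no dialogue history or multi-step planning is kept: each command is grounded independently against the current screenshot, which is what allows the compact 0.27B-parameter model and low latency the abstract reports.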