|
---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- code
pretty_name: OmniACT
---
|
<img src="intro.png" width="700" title="OmniACT"> |
|
|
|
Dataset for the paper [OmniACT: A Dataset and Benchmark for Enabling Multimodal Generalist Autonomous Agents for Desktop and Web](https://arxiv.org/abs/2402.17553).
|
|
|
Splits: |
|
|
|
| split_name | count |
|------------|-------|
| train      | 6788  |
| test       | 2020  |
| val        | 991   |
|
|
|
Example datapoint: |
|
```json
"2849": {
    "task": "data/tasks/desktop/ibooks/task_1.30.txt",
    "image": "data/data/desktop/ibooks/screen_1.png",
    "ocr": "ocr/desktop/ibooks/screen_1.json",
    "color": "detact_color/desktop/ibooks/screen_1.json",
    "icon": "detact_icon/desktop/ibooks/screen_1.json",
    "box": "data/metadata/desktop/boxes/ibooks/screen_1.json"
},
```
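
To work with the data programmatically, here is a minimal loading sketch. It assumes each split ships as a single JSON file keyed by sample id; the `train.json` filename is an assumption, so adjust it to the actual split file in this repository:

```python
import json

# Assumed split filename; replace with the actual file shipped with the dataset.
with open("train.json") as f:
    samples = json.load(f)

sample = samples["2849"]
print(sample["task"])   # path to the task/script text file
print(sample["image"])  # path to the screen image
```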
|
|
|
Each datapoint has the following fields:
|
|
|
- `task` - contains the natural language task description ("Task") along with the corresponding PyAutoGUI code ("Output Script"):
|
```text
Task: Navigate to see the upcoming titles
Output Script:
pyautogui.moveTo(1881.5,1116.0)
```
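
For illustration, a small sketch of splitting such a task file into its description and script parts. The `parse_task_file` helper is not part of the dataset tooling and assumes exactly the plain-text layout shown above:

```python
from pathlib import Path

def parse_task_file(path: str) -> tuple[str, str]:
    """Split a task file into (task description, PyAutoGUI script).

    Assumes the layout shown above: a "Task:" line followed by an
    "Output Script:" section containing the script.
    """
    text = Path(path).read_text()
    task_part, script_part = text.split("Output Script:", maxsplit=1)
    return task_part.replace("Task:", "", 1).strip(), script_part.strip()

# task, script = parse_task_file("data/tasks/desktop/ibooks/task_1.30.txt")
# task   -> "Navigate to see the upcoming titles"
# script -> "pyautogui.moveTo(1881.5,1116.0)"
```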
|
- `image` - screen image where the action is performed |
|
|
|
<img src="screen_1.png" width="700" title="example screen image"> |
|
|
|
- `ocr` - OCR output for the screen image (every text element extracted from the screen, together with its position):
|
```json
{
    "ocr": {
        "Book": [1838.0, 52.5],
        "Store": [253.0, 394.5],
        "Browse": [2845.5, 1323.0],
        "Sections": [3227.0, 52.5],
        "v": [3298.5, 53.0],
        "Q": [47.5, 138.5],
        "Apple": [73.5, 230.5],
        "Books": [165.5, 987.5],
        "Top": [516.5, 227.0],
        "Charts": [622.0, 224.5],
        ...
    }
}
```
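
As a rough illustration of how these coordinates can be used, the sketch below loads an OCR file and finds the text element closest to a target point. The `closest_ocr_element` helper is hypothetical and assumes the `{"ocr": {text: [x, y], ...}}` structure shown above:

```python
import json
import math

def closest_ocr_element(ocr_path: str, x: float, y: float):
    """Return the (text, [x, y]) OCR entry closest to the point (x, y)."""
    with open(ocr_path) as f:
        ocr = json.load(f)["ocr"]  # assumed structure: {"text": [x, y], ...}
    return min(ocr.items(), key=lambda item: math.dist(item[1], (x, y)))

# Which on-screen text sits closest to the scripted moveTo target above?
# closest_ocr_element("ocr/desktop/ibooks/screen_1.json", 1881.5, 1116.0)
```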
|
- `color` and `icon` - lists of detected UI elements along with their positions:
|
```json
{
    "ctrl": [[2319.5, 2037.5], [2278.5, 1886.5], [2125.5, 1887.0], ...],
    "filter4": [[2319.5, 2037.5], [2278.5, 1886.5], ...],
    "calendar-empty": [[2319.5, 2037.5], [2278.5, 1886.5], [2125.5, 1887.0], ...],
    "volume-mute3": [[2319.5, 2037.5], [2278.5, 1886.5], [2125.5, 1887.0], ...],
    "command": [[2319.5, 2037.5], [2278.5, 1886.5], [2125.5, 1887.0], ...],
    "film3": [[2319.5, 2037.5], [2278.5, 1886.5], [2125.5, 1887.0], ...],
    "insert-template": [[2125.5, 1887.0], [2258.5, 1887.0], ...],
    "grid7": [[2296.5, 1894.5], [2125.5, 1884.5], [2099.5, 1868.5], [622.5, 1953.5]],
    "map5": [[622.5, 1953.5]]
}
```
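
A small, purely illustrative sketch of loading the two detection files and merging them into a single element-to-positions map (the `load_ui_elements` helper is ours; it simply concatenates positions when a name appears in both files):

```python
import json

def load_ui_elements(color_path: str, icon_path: str) -> dict:
    """Merge the color and icon detection files into one {name: positions} map."""
    with open(color_path) as f:
        elements = json.load(f)
    with open(icon_path) as f:
        for name, positions in json.load(f).items():
            # Concatenate positions if an element name appears in both files.
            elements.setdefault(name, []).extend(positions)
    return elements

# elements = load_ui_elements(
#     "detact_color/desktop/ibooks/screen_1.json",
#     "detact_icon/desktop/ibooks/screen_1.json",
# )
```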
|
- `box` - bounding boxes of key UI elements:
|
```json
{
    "example_0": {
        "top_left": [482, 1232],
        "bottom_right": [1001, 1516],
        "label": "browse_mystery"
    },
    "example_1": {
        "top_left": [1053, 1235],
        "bottom_right": [1572, 1519],
        "label": "browse_kids"
    },
    "example_2": {
        "top_left": [1622, 1237],
        "bottom_right": [2141, 1521],
        "label": "browse_non_fiction"
    },
    "example_3": {
        "top_left": [2191, 1238],
        "bottom_right": [2710, 1522],
        "label": "browse_romance"
    },
    ...
}
```
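
These boxes make it easy to check whether a predicted click lands on a labelled UI element. The sketch below only illustrates that idea; the `boxes_containing` helper is ours and is not the benchmark's official evaluation metric:

```python
import json

def boxes_containing(box_path: str, x: float, y: float) -> list:
    """Return labels of all bounding boxes that contain the point (x, y)."""
    with open(box_path) as f:
        boxes = json.load(f)
    hits = []
    for entry in boxes.values():
        (left, top), (right, bottom) = entry["top_left"], entry["bottom_right"]
        if left <= x <= right and top <= y <= bottom:
            hits.append(entry["label"])
    return hits

# Does the scripted moveTo target land inside any labelled element?
# boxes_containing("data/metadata/desktop/boxes/ibooks/screen_1.json", 1881.5, 1116.0)
```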
|
To cite OmniACT, please use: |
|
```bibtex
@misc{kapoor2024omniact,
      title={OmniACT: A Dataset and Benchmark for Enabling Multimodal Generalist Autonomous Agents for Desktop and Web},
      author={Raghav Kapoor and Yash Parag Butala and Melisa Russak and Jing Yu Koh and Kiran Kamble and Waseem Alshikh and Ruslan Salakhutdinov},
      year={2024},
      eprint={2402.17553},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```