Dataset metadata:
- Modalities: Image, Text
- Formats: json
- ArXiv: 2410.13824
- Libraries: Datasets, pandas
- License: odc-by
Commit cc7948c by oottyy
1 parent: f3cb5c8

Update README.md

Files changed (1):
  1. README.md +14 -3
README.md CHANGED
@@ -1,9 +1,9 @@
  ---
  license: odc-by
  ---
- #### Mind2Web training set for the paper: [Harnessing Webpage UIs for Text-Rich Visual Understanding]()
+ #### Mind2Web training set for the paper: [Harnessing Webpage UIs for Text-Rich Visual Understanding](https://arxiv.org/abs/2410.13824)

- 🌐 [Homepage](https://neulab.github.io/MultiUI/) | 🐍 [GitHub](https://github.com/neulab/multiui) | 📖 [arXiv]()
+ 🌐 [Homepage](https://neulab.github.io/MultiUI/) | 🐍 [GitHub](https://github.com/neulab/multiui) | 📖 [arXiv](https://arxiv.org/abs/2410.13824)

  ## Introduction
  We introduce **MultiUI**, a dataset containing 7.3 million samples from 1 million websites, covering diverse multimodal tasks and UI layouts. Models trained on **MultiUI** not only excel in web UI tasks—achieving up to a 48% improvement on VisualWebBench and a 19.1% boost in action accuracy on the web agent dataset Mind2Web—but also generalize surprisingly well to non-web UI tasks and even to non-UI domains, such as document understanding, OCR, and chart interpretation.
@@ -16,4 +16,15 @@ We introduce **MultiUI**, a dataset containing 7.3 million samples from 1 millio
  * Xiang Yue: xyue2@andrew.cmu.edu

  ## Citation
- If you find this work helpful, please cite our paper:
+ If you find this work helpful, please cite our paper:
+ ````
+ @misc{liu2024harnessingwebpageuistextrich,
+       title={Harnessing Webpage UIs for Text-Rich Visual Understanding},
+       author={Junpeng Liu and Tianyue Ou and Yifan Song and Yuxiao Qu and Wai Lam and Chenyan Xiong and Wenhu Chen and Graham Neubig and Xiang Yue},
+       year={2024},
+       eprint={2410.13824},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV},
+       url={https://arxiv.org/abs/2410.13824},
+ }
+ ````
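
Since the dataset card above lists JSON-formatted data usable with the Hugging Face `datasets` and pandas libraries, here is a minimal loading sketch. It is an illustration only: the repository id and split name are placeholders, not values confirmed by this page.

```python
# Hedged sketch: load the Mind2Web training data from the dataset repo with
# the `datasets` library, then inspect a small slice via pandas.
# "ORG/DATASET_NAME" and the "train" split are placeholders -- substitute the
# actual repository id and split shown on the dataset page.
from datasets import load_dataset

ds = load_dataset("ORG/DATASET_NAME", split="train")  # placeholder repo id

print(ds[0])  # look at one sample record

# Optional: convert only the first 100 rows to a pandas DataFrame,
# since materializing millions of samples in memory would be expensive.
df = ds.select(range(100)).to_pandas()
print(df.head())
```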