(Dataset viewer preview omitted. Columns: `sentence1` — image, 448 px wide; `sentence2` — image, 448 px wide; `score` — float64 in the range 0–5.)
Dataset Summary
This dataset renders the sentence pairs of STS-12 into images. We see a need to assess vision encoders' ability to understand text; a natural way to do so is to apply the standard STS evaluation protocol, with the texts rendered into images.
Examples of Use
Load the test split:

```python
from datasets import load_dataset

dataset = load_dataset("Pixel-Linguist/rendered-sts12", split="test")
```
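Once loaded, evaluation follows the standard STS protocol: embed both sentences of each pair, predict their similarity (typically cosine similarity of the embeddings), and report the Spearman correlation against the gold scores. A minimal sketch of that loop, using random toy embeddings as a hypothetical stand-in for a real vision encoder:

```python
import numpy as np
from scipy.stats import spearmanr


def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# Toy data standing in for encoder outputs; a real evaluation would
# embed the rendered `sentence1`/`sentence2` images from the dataset.
rng = np.random.default_rng(0)
emb1 = rng.normal(size=(10, 16))   # embeddings for sentence1 images
emb2 = rng.normal(size=(10, 16))   # embeddings for sentence2 images
gold = rng.uniform(0, 5, size=10)  # gold similarity scores in [0, 5]

preds = [cosine_sim(a, b) for a, b in zip(emb1, emb2)]
rho, _ = spearmanr(preds, gold)
print(f"Spearman's rho: {rho:.3f}")
```

The `encode` step here is random on purpose: only the shape of the evaluation (pairwise cosine similarities, then Spearman's rho against `score`) reflects the STS protocol.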
Languages
English-only; for multilingual and cross-lingual datasets, see Pixel-Linguist/rendered-stsb and Pixel-Linguist/rendered-sts17.
Citation Information
@article{xiao2024pixel,
title={Pixel Sentence Representation Learning},
author={Xiao, Chenghao and Huang, Zhuoxu and Chen, Danlu and Hudson, G Thomas and Li, Yizhi and Duan, Haoran and Lin, Chenghua and Fu, Jie and Han, Jungong and Moubayed, Noura Al},
journal={arXiv preprint arXiv:2402.08183},
year={2024}
}