---
dataset_info:
  features:
  - name: sentence1
    dtype: image
  - name: sentence2
    dtype: image
  - name: score
    dtype: float64
  splits:
  - name: train
    num_bytes: 34324588.5
    num_examples: 2234
  - name: test
    num_bytes: 36379487.0
    num_examples: 3108
  download_size: 53812322
  dataset_size: 70704075.5
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

### Dataset Summary

This dataset renders the sentence pairs of the STS-12 benchmark into images. It is intended for assessing vision encoders' ability to understand text: a natural approach is to evaluate them under the standard STS protocols, but with the texts rendered as images. Each example consists of two rendered sentences (`sentence1`, `sentence2`) and a gold similarity `score`.

**Examples of Use**

Load the test split:

```python
from datasets import load_dataset

dataset = load_dataset("Pixel-Linguist/rendered-sts12", split="test")
```

### Languages

English-only; for multilingual and cross-lingual datasets, see `Pixel-Linguist/rendered-stsb` and `Pixel-Linguist/rendered-sts17`.

### Citation Information

```
@article{xiao2024pixel,
  title={Pixel Sentence Representation Learning},
  author={Xiao, Chenghao and Huang, Zhuoxu and Chen, Danlu and Hudson, G Thomas and Li, Yizhi and Duan, Haoran and Lin, Chenghua and Fu, Jie and Han, Jungong and Moubayed, Noura Al},
  journal={arXiv preprint arXiv:2402.08183},
  year={2024}
}
```
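STS-style evaluations are typically scored by the Spearman rank correlation between model-predicted similarities and the gold `score` column. As a minimal, dependency-free sketch of that metric (in practice you would likely use `scipy.stats.spearmanr`; the toy `gold`/`pred` values below are illustrative, not taken from this dataset):

```python
def spearman_correlation(xs, ys):
    """Spearman rank correlation: Pearson correlation of rank-transformed values."""

    def ranks(values):
        # Assign 1-based ranks, averaging ranks over ties.
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg_rank
            i = j + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)


# Toy check: predictions that are perfectly monotone in the gold
# scores give a correlation of 1.0.
gold = [0.0, 1.2, 3.4, 5.0]
pred = [0.1, 0.5, 0.7, 0.9]
print(spearman_correlation(gold, pred))  # 1.0
```

Because Spearman correlation depends only on ranks, it rewards models that order sentence pairs correctly by similarity, regardless of the scale of their raw scores.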