---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- machine-generated
language:
- sv
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-sts-b
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
paperswithcode_id: null
pretty_name: Swedish Machine Translated STS-B
dataset_info:
  features:
  - name: sentence1
    dtype: string
  - name: sentence2
    dtype: string
  - name: score
    dtype: float32
  config_name: plain_text
  splits:
  - name: test
    num_bytes: 171823
    num_examples: 1379
  - name: validation
    num_bytes: 218843
    num_examples: 1500
  - name: train
    num_bytes: 772847
    num_examples: 5749
  download_size: 383047
  dataset_size: 1163513
---
Dataset Card for Swedish Machine Translated STS-B
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: stsb-mt-sv homepage
- Repository: stsb-mt-sv repository
- Paper: Why Not Simply Translate? A First Swedish Evaluation Benchmark for Semantic Similarity
- Point of Contact: Tim Isbister
Dataset Summary
This dataset is a machine-translated Swedish version of the STS Benchmark (STS-B) for semantic textual similarity.
Supported Tasks and Leaderboards
This dataset can be used to evaluate semantic textual similarity models for Swedish.
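A minimal evaluation sketch is shown below. The Hub dataset ID ("timpal0l/stsb-mt-sv") and the Swedish sentence-embedding model name are assumptions, not part of this card; substitute the identifiers you actually use.

```python
# Sketch: scoring a Swedish sentence-embedding model on the test split with
# Spearman correlation against the gold similarity scores (0.0-5.0).
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, util
from scipy.stats import spearmanr

test = load_dataset("timpal0l/stsb-mt-sv", split="test")  # hypothetical Hub ID

model = SentenceTransformer("KBLab/sentence-bert-swedish-cased")  # any Swedish model

emb1 = model.encode(test["sentence1"], convert_to_tensor=True)
emb2 = model.encode(test["sentence2"], convert_to_tensor=True)

# Cosine similarity of each sentence pair, compared to the gold score.
cosine_scores = util.cos_sim(emb1, emb2).diagonal().cpu().numpy()
gold = [float(s) for s in test["score"]]

print("Spearman correlation:", spearmanr(cosine_scores, gold).correlation)
```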
Languages
The text in the dataset is in Swedish. The associated BCP-47 code is sv.
Dataset Structure
Data Instances
What a sample looks like:
{'score': '4.2',
'sentence1': 'Undrar om jultomten kommer i år pga Corona..?',
'sentence2': 'Jag undrar om jultomen kommer hit i år med tanke på covid-19',
}
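As a sketch, an instance like the one above can be inspected with the datasets library; the Hub ID used here is an assumption, so adjust it to wherever you obtained the data:

```python
from datasets import load_dataset

# Hypothetical Hub ID -- replace with your copy of the dataset.
dataset = load_dataset("timpal0l/stsb-mt-sv")

print(dataset)              # split names and sizes
print(dataset["train"][0])  # one {'sentence1', 'sentence2', 'score'} record
```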
Data Fields
- score: a float representing the semantic similarity score, where 0.0 is the lowest score and 5.0 is the highest.
- sentence1: a string containing the first sentence.
- sentence2: a string containing the second sentence, whose semantic similarity to sentence1 is scored.
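For example, when training a bi-encoder with a cosine-similarity objective, the 0.0-5.0 score is commonly rescaled to [0, 1]. A small sketch of that field handling is shown below (the training loop itself is omitted, and the Hub ID is an assumption):

```python
# Sketch: turning the raw fields into (sentence1, sentence2, label) examples
# with the score rescaled from [0.0, 5.0] to [0.0, 1.0].
from datasets import load_dataset

train = load_dataset("timpal0l/stsb-mt-sv", split="train")  # hypothetical Hub ID

def to_example(row):
    return {
        "texts": (row["sentence1"], row["sentence2"]),
        "label": float(row["score"]) / 5.0,  # normalized similarity in [0, 1]
    }

train_examples = [to_example(row) for row in train]
print(train_examples[0])
```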
Data Splits
The data is split into a training, validation and test set. The final split sizes are as follows:
| Train | Valid | Test |
|-------|-------|------|
| 5749  | 1500  | 1379 |
Dataset Creation
Curation Rationale
[Needs More Information]
Source Data
Initial Data Collection and Normalization
[Needs More Information]
Who are the source language producers?
[Needs More Information]
Annotations
Annotation process
[Needs More Information]
Who are the annotators?
[Needs More Information]
Personal and Sensitive Information
[Needs More Information]
Considerations for Using the Data
Social Impact of Dataset
[Needs More Information]
Discussion of Biases
[Needs More Information]
Other Known Limitations
[Needs More Information]
Additional Information
Dataset Curators
The machine-translated version was put together by @timpal0l.
Licensing Information
[Needs More Information]
Citation Information
@article{isbister2020not,
title={Why Not Simply Translate? A First Swedish Evaluation Benchmark for Semantic Similarity},
author={Isbister, Tim and Sahlgren, Magnus},
journal={arXiv preprint arXiv:2009.03116},
year={2020}
}
Contributions
Thanks to @timpal0l for adding this dataset.