kietnt0603 committed e5ca4e3 (parent: 4e553d1): Update README.md

README.md
---
license: mit
language:
- am
- ha
- en
- es
- te
- ar
- af
tags:
- Semantic Textual Relatedness
size_categories:
- 10K<n<100K
---

# Dataset Card for Dataset Name

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

Each instance in the training, development, and test sets is a sentence pair labeled with a score representing the degree of semantic textual relatedness between the two sentences. Scores range from 0 (maximally unrelated) to 1 (maximally related) and were determined through manual annotation. Specifically, a comparative annotation approach was used, which avoids several known biases of traditional rating-scale methods and yields highly reliable final relatedness rankings. Further details about the task, the annotation method, how semantic textual relatedness (STR) differs from semantic textual similarity, its applications, and more can be found in the accompanying paper.
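
Below is a minimal sketch of how one might load and inspect such sentence-pair data with the 🤗 `datasets` library. The Hub dataset ID and the column names (`sentence1`, `sentence2`, `score`) are assumptions for illustration only and are not confirmed by this card; adjust them to the actual repository layout.

```python
# A minimal sketch, not the official loader: the dataset ID and column names
# below are illustrative assumptions.
from datasets import load_dataset

# Hypothetical Hub ID; replace with the actual dataset repository.
ds = load_dataset("kietnt0603/semantic-textual-relatedness")

# Each instance is a sentence pair with a gold relatedness score in [0, 1].
for row in ds["train"].select(range(3)):
    # "sentence1", "sentence2", and "score" are assumed column names.
    print(f"{row['score']:.2f}\t{row['sentence1']}\t{row['sentence2']}")
```
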
### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** <https://github.com/semantic-textual-relatedness/Semantic_Relatedness_SemEval2024/tree/main>