---
license: gfdl
task_categories:
- sentence-similarity
language:
- en
size_categories:
- 100K<n<1M
configs:
- config_name: raw
data_files:
- split: train
path: raw/train-*
- split: test
path: raw/test-*
- config_name: pair-score
data_files:
- split: train
path: pair-score/train-*
- split: test
path: pair-score/test-*
- config_name: pair-score-hard
data_files:
- split: train
path: pair-score-hard/train-*
- split: test
path: pair-score-hard/test-*
- config_name: triplet
data_files:
- split: train
path: triplet/train-*
- split: test
path: triplet/test-*
- config_name: triplet-hard
data_files:
- split: train
path: triplet-hard/train-*
- split: test
path: triplet-hard/test-*
---

# Wiki Sim

## Overview
This new semi-synthetic dataset is derived from [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).

Each row contains 1-3 reference sentences extracted from the original dataset.
For each reference sentence, we use an optimized DSPy program to generate 4 similar sentences (see the sketch after this list):
- Synonym (Replace words with synonyms to maintain the same meaning.)
- Paraphrase (Rephrase the sentence using a different structure while keeping the same idea.)
- Conceptual Overlap (Express a related concept differently without changing the core meaning.)
- Contextual Meaning (Modify the sentence to derive meaning from context, preserving the original intent.)
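
As a minimal sketch, a DSPy signature with one output field per instruction type could drive such a generation step. This is illustrative only, not the optimized program actually used to build the dataset:

```python
import dspy

# Hypothetical signature: one output field per instruction type.
class SimilarSentences(dspy.Signature):
    """Generate four sentences similar to the reference sentence."""

    reference: str = dspy.InputField(desc="sentence extracted from Wikipedia")
    synonym: str = dspy.OutputField(desc="replace words with synonyms, same meaning")
    paraphrase: str = dspy.OutputField(desc="different structure, same idea")
    conceptual_overlap: str = dspy.OutputField(desc="related concept, core meaning intact")
    contextual_meaning: str = dspy.OutputField(desc="meaning drawn from context, intent preserved")

# Configure any LM supported by DSPy before running the program.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))
generate = dspy.Predict(SimilarSentences)
result = generate(reference="The Eiffel Tower was completed in 1889.")
print(result.synonym, result.paraphrase, sep="\n")
```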
Additionally, we score each result using [cross-encoder/stsb-roberta-large](https://huggingface.co/cross-encoder/stsb-roberta-large).
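
Scoring pairs with this model is straightforward via the sentence-transformers `CrossEncoder` API; a minimal sketch with made-up example pairs:

```python
from sentence_transformers import CrossEncoder

# Load the STS-B cross-encoder referenced above.
model = CrossEncoder("cross-encoder/stsb-roberta-large")

# Each pair receives an STS-B style similarity score, roughly in [0, 1].
scores = model.predict([
    ("The tower was completed in 1889.", "Construction of the tower finished in 1889."),
    ("The tower was completed in 1889.", "The tower attracts millions of visitors."),
])
print(scores)
```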
We also use this cross-encoder to mine hard negatives from other contiguous sentences in the original passage, retaining the most similar one as the negative.
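
A sketch of what that mining step could look like (the helper below is hypothetical, not the dataset's actual code): score the reference against the other sentences from its passage and keep the highest-scoring one.

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/stsb-roberta-large")

def mine_hard_negative(reference: str, passage_sentences: list[str]):
    """Return the passage sentence most similar to the reference, with its score."""
    candidates = [s for s in passage_sentences if s != reference]
    scores = model.predict([(reference, s) for s in candidates])
    return max(zip(candidates, scores), key=lambda pair: pair[1])

negative, negative_score = mine_hard_negative(
    "The tower was completed in 1889.",
    ["The tower was completed in 1889.",
     "It was the tallest man-made structure at the time.",
     "Paris hosted the 1889 World's Fair."],
)
```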
## Purpose
We aim to expand training data for small models like WordLlama and for general embedding models, targeting benchmarks like STS-B and similarity tasks that differ from NLI or QnA.
## Dataset

The columns of the dataset include:

- `synonym`
- `paraphrase`
- `conceptual_overlap`
- `contextual_meaning`
- `reference`
- `negative`
- `negative_score`
- `model_id`
- `cross_encoder`
- `synonym_score`
- `paraphrase_score`
- `conceptual_overlap_score`
- `contextual_meaning_score`
where `reference` and `negative` are derived from wikimedia/wikipedia, and the similarity text columns are synthetically derived.

We filter out all rows where the negative score exceeds any of the similarity scores.
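
For illustration, the same filtering rule could be expressed with the `datasets` library as below; the repo id is a placeholder, not the card's actual path:

```python
from datasets import load_dataset

# Placeholder repo id; substitute the actual dataset path.
raw = load_dataset("user/wiki-sim", "raw", split="train")

similarity_cols = ["synonym_score", "paraphrase_score",
                   "conceptual_overlap_score", "contextual_meaning_score"]

# Keep only rows where the hard negative scores below every similarity column.
filtered = raw.filter(
    lambda row: row["negative_score"] < min(row[c] for c in similarity_cols)
)
```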
## Results

The 4 instruction types produce results of varying similarity scores, with `synonym` being the most similar and `contextual_meaning` the least similar.
## Subsets

- `pair-score` - random choice weighted to a target of 0.9
- `pair-score-hard` - random choice weighted to a target of 0.85
- `triplet` - random choice weighted to a target of 0.9
- `triplet-hard` - random choice weighted to a target of 0.85
- `raw` - full dataset
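
As a sketch of the intended use, a triplet subset could feed a sentence-transformers training run; the repo id and the triplet column names (anchor/positive/negative) are assumptions here, so check the actual schema first:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Placeholder repo id; triplet column names are assumed, not confirmed.
train = load_dataset("user/wiki-sim", "triplet-hard", split="train")

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train,
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()
```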