---
license: gfdl
task_categories:
  - sentence-similarity
language:
  - en
size_categories:
  - 100K<n<1M
---

# Wiki Sim

## Overview

This semi-synthetic dataset is derived from `wikimedia/wikipedia`. Each row contains 1-3 reference sentences extracted from the original dataset.

For each reference sentence, we use an optimized DSPy program to generate 4 similar sentences (see the sketch after this list):

- **Synonym**: Replace words with synonyms to maintain the same meaning.
- **Paraphrase**: Rephrase the sentence using a different structure while keeping the same idea.
- **Conceptual Overlap**: Express a related concept differently without changing the core meaning.
- **Contextual Meaning**: Modify the sentence to derive meaning from context, preserving the original intent.
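
The optimized program itself is not included here, but a minimal, hypothetical DSPy sketch of the generation step might look like the following (the LM name, field descriptions, and example sentence are assumptions, and the actual program was additionally optimized):

```python
import dspy

# Any DSPy-supported LM works here; this model name is illustrative.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class SimilarSentences(dspy.Signature):
    """Generate four sentences similar to the reference sentence."""

    reference: str = dspy.InputField(desc="Sentence extracted from Wikipedia.")
    synonym: str = dspy.OutputField(desc="Replace words with synonyms; keep the meaning.")
    paraphrase: str = dspy.OutputField(desc="Rephrase with a different structure; same idea.")
    conceptual_overlap: str = dspy.OutputField(desc="Express a related concept; core meaning intact.")
    contextual_meaning: str = dspy.OutputField(desc="Derive meaning from context; preserve intent.")

generate = dspy.Predict(SimilarSentences)
pred = generate(reference="The Nile is the longest river in Africa.")
print(pred.synonym)
```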

Additionally, we score each result using `cross-encoder/stsb-roberta-large`. We also use this cross-encoder to mine hard negatives from the other contiguous sentences in the original passage, retaining the most similar one.
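
The scoring and hard-negative mining steps can be sketched with the `sentence-transformers` `CrossEncoder` class (the example sentences below are illustrative, not taken from the dataset):

```python
from sentence_transformers import CrossEncoder
import numpy as np

# The cross-encoder named above; for STS-B models, predict() returns
# one similarity score per (sentence_a, sentence_b) pair.
scorer = CrossEncoder("cross-encoder/stsb-roberta-large")

reference = "The Nile is the longest river in Africa."
# Other contiguous sentences from the same passage (illustrative).
candidates = [
    "It flows north through eleven countries.",
    "Its two major tributaries are the White Nile and the Blue Nile.",
]

scores = scorer.predict([(reference, c) for c in candidates])
hard_negative = candidates[int(np.argmax(scores))]  # most similar candidate
negative_score = float(scores.max())
```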

## Purpose

We aim to expand training data for small models like WordLlama, as well as general embedding models, targeting benchmarks like STS-B and similarity tasks that differ from NLI or QnA.

## Dataset

The columns of the dataset include:

`synonym`, `paraphrase`, `conceptual_overlap`, `contextual_meaning`, `reference`, `negative`, `negative_score`, `model_id`, `cross_encoder`, `synonym_score`, `paraphrase_score`, `conceptual_overlap_score`, `contextual_meaning_score`

where `reference` and `negative` are derived from `wikimedia/wikipedia`, and the similarity text columns are synthetically derived.

We filter out all rows where the negative score exceeds any of the similarity scores.
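
Equivalently, a kept row satisfies `negative_score < min(similarity scores)`. A minimal sketch of this check with the `datasets` library (the dataset id is assumed from this repo, and the released data already has the filter applied):

```python
from datasets import load_dataset

# Dataset id assumed from this repo; adjust the split/config if needed.
ds = load_dataset("dleemiller/wiki-sim", split="train")

SIM_COLS = [
    "synonym_score",
    "paraphrase_score",
    "conceptual_overlap_score",
    "contextual_meaning_score",
]

def keep(row):
    # Keep rows where every similarity score beats the negative score.
    return all(row[col] > row["negative_score"] for col in SIM_COLS)

ds = ds.filter(keep)
```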

## Results