---
dataset_info:
  features:
  - name: sentence
    dtype: string
  - name: cluster
    dtype: int64
  - name: embedding_384
    sequence: float32
  splits:
  - name: train
    num_bytes: 1584641313
    num_examples: 990526
  download_size: 2163024841
  dataset_size: 1584641313
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- sentence-similarity
language:
- en
size_categories:
- 100K<n<1M
---
Dataset Card for "wikianswers_embeddings_384"
Dataset Summary
nikhilchigali/wikianswers_embeddings_384 is a subset of the embedding-data/WikiAnswers dataset on the Hugging Face Hub. As opposed to the original dataset with 3,386,256 rows (sets), this dataset contains only 0.13% of the total sets. The sets of sentences have been unraveled into individual rows, with a cluster ID identifying sentences that come from the same set. Each sentence has its associated cluster ID and an embedding of dimension 384.
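A minimal sketch of the unraveling step, assuming the source dataset exposes each group of paraphrases under a set column (this is not the exact preprocessing script used to build this dataset):
from datasets import load_dataset

# Load the original sets of paraphrases (a small slice, for illustration only).
wikianswers = load_dataset("embedding-data/WikiAnswers", split="train")

rows = []
for cluster_id, example in enumerate(wikianswers.select(range(1000))):
    # Assumed column name: each example holds one set of equivalent sentences.
    for sentence in example["set"]:
        rows.append({"sentence": sentence, "cluster": cluster_id})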
Languages
English.
Dataset Structure
Each example in the dataset contains a sentence and the ID of the cluster of equivalent sentences it belongs to. Sentences in the same cluster are paraphrases of each other. The embeddings for the dataset were created using the all-MiniLM-L6-v2 model.
{"sentence": [sentence], "cluster": [cluster_id], "embedding_384": [embeddings]}
{"sentence": [sentence], "cluster": [cluster_id], "embedding_384": [embeddings]}
{"sentence": [sentence], "cluster": [cluster_id], "embedding_384": [embeddings]}
...
{"sentence": [sentence], "cluster": [cluster_id], "embedding_384": [embeddings]}
Usage Example
Install the 🤗 Datasets library with pip install datasets
and load the dataset from the Hub with:
from datasets import load_dataset
dataset = load_dataset("nikhilchigali/wikianswers_embeddings_384")
The dataset is loaded as a DatasetDict and, for N examples, has the following format:
DatasetDict({
    train: Dataset({
        features: ['sentence', 'cluster', 'embedding_384'],
        num_rows: N
    })
})
Review an example at index i with:
dataset["train"][i]
Source Data
embedding-data/WikiAnswers on the Hugging Face Hub: https://huggingface.co/datasets/embedding-data/WikiAnswers