---
language:
- en
size_categories:
- 10M<n<100M
task_categories:
- feature-extraction
- text-classification
- sentence-similarity
dataset_info:
  features:
  - name: sentence
    dtype: string
  - name: cluster
    dtype: int64
  splits:
  - name: train
    num_bytes: 1820698240
    num_examples: 30472041
  download_size: 695740262
  dataset_size: 1820698240
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Dataset Card for "WikiAnswers Small"
## Dataset Summary
`nikhilchigali/wikianswers_small` is a subset of the [`embedding-data/WikiAnswers`](https://huggingface.co/datasets/embedding-data/WikiAnswers) dataset. As opposed to the original dataset's 3,386,256 rows (sets of equivalent sentences), this dataset contains only 4% of the total sets. The sets have been unraveled into individual rows, each carrying a cluster ID that identifies sentences belonging to the same set.
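
For reference, a minimal sketch of how such a subset could be produced from the original dataset. The shuffle-and-select sampling and the `set` column name are assumptions about the source data, not the exact script used to build this dataset:

```python
from datasets import Dataset, load_dataset

# Load the original dataset; each row is assumed to hold a set of
# equivalent sentences under a "set" column.
original = load_dataset("embedding-data/WikiAnswers", split="train")

# Keep roughly 4% of the sets (illustrative selection, not the exact method used).
subset = original.shuffle(seed=42).select(range(int(0.04 * len(original))))

# Unravel each set into individual (sentence, cluster) rows.
rows = {"sentence": [], "cluster": []}
for cluster_id, example in enumerate(subset):
    for sentence in example["set"]:
        rows["sentence"].append(sentence)
        rows["cluster"].append(cluster_id)

unraveled = Dataset.from_dict(rows)
```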
## Languages
English.
## Dataset Structure
Each example in the dataset contains a sentence and the cluster ID it shares with other equivalent sentences. Sentences in the same cluster are paraphrases of each other.
{"sentence": [sentence], "cluster": [cluster_id]}
{"sentence": [sentence], "cluster": [cluster_id]}
{"sentence": [sentence], "cluster": [cluster_id]}
...
{"sentence": [sentence], "cluster": [cluster_id]}
## Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:

```python
from datasets import load_dataset

dataset = load_dataset("nikhilchigali/wikianswers_small")
```
The dataset is loaded as a `DatasetDict` and has the following format for `N` examples:

```python
DatasetDict({
    train: Dataset({
        features: ['sentence', 'cluster'],
        num_rows: N
    })
})
```
Review an example `i` with:

```python
dataset["train"][i]
```
## Source Data

[`embedding-data/WikiAnswers`](https://huggingface.co/datasets/embedding-data/WikiAnswers) on the Hugging Face Hub.