---
language:
  - en
dataset_info:
  features:
    - name: sentence
      dtype: string
    - name: cluster
      dtype: int64
    - name: embedding_768
      sequence: float32
  splits:
    - name: train
      num_bytes: 3106089249
      num_examples: 990526
  download_size: 3686385268
  dataset_size: 3106089249
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

Dataset Card for "wikianswers_embeddings_768"

Dataset Summary

nikhilchigali/wikianswers_embeddings_768 is a subset of the embedding-data/WikiAnswers dataset (Link). Whereas the original dataset contains 3,386,256 rows (sets), this dataset keeps only 0.13% of those sets. The sets of sentences have been unraveled into individual rows, with a shared cluster ID identifying sentences that came from the same set. Each sentence carries its cluster ID and an embedding of dimension 768.

Languages

English.

Dataset Structure

Each example in the dataset contains a sentence and the ID of the cluster of equivalent sentences it belongs to. Sentences in the same cluster are paraphrases of each other. The embeddings were created with the all-distilroberta-v1 model.

{"sentence": [sentence], "cluster": [cluster_id], "embedding_768": [embeddings]}
{"sentence": [sentence], "cluster": [cluster_id], "embedding_768": [embeddings]}
{"sentence": [sentence], "cluster": [cluster_id], "embedding_768": [embeddings]}
...
{"sentence": [sentence], "cluster": [cluster_id], "embedding_768": [embeddings]}
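Since sentences in the same cluster are paraphrases, their 768-dimensional embeddings should be close under cosine similarity. Below is a minimal sketch of that comparison using synthetic vectors in place of the dataset's `embedding_768` field (the random vectors and the noise scale are illustrative assumptions, not values from the dataset):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 768-dim vectors standing in for "embedding_768" values.
rng = np.random.default_rng(0)
base = rng.normal(size=768).astype(np.float32)
paraphrase = base + 0.05 * rng.normal(size=768).astype(np.float32)  # near-duplicate
unrelated = rng.normal(size=768).astype(np.float32)                 # independent vector

print(cosine_similarity(base, paraphrase))  # close to 1.0
print(cosine_similarity(base, unrelated))   # near 0.0
```

The same function can be applied to two real `embedding_768` values drawn from the train split.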

Usage Example

Install the 🤗 Datasets library with pip install datasets and load the dataset from the Hub with:

from datasets import load_dataset
dataset = load_dataset("nikhilchigali/wikianswers_embeddings_768")

The dataset is loaded as a DatasetDict and, for N examples, has the format:

DatasetDict({
    train: Dataset({
        features: ['sentence', 'cluster', 'embedding_768'],
        num_rows: N
    })
})

Review the example at index i with:

dataset["train"][i]
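Because equivalent sentences share a cluster ID, paraphrase sets can be recovered by grouping rows on the `cluster` field. The sketch below uses a few hand-written rows mimicking the dataset schema (the sentences are invented for illustration); in practice you would iterate over `dataset["train"]` instead:

```python
from collections import defaultdict

# Toy rows following the {"sentence": ..., "cluster": ...} schema.
rows = [
    {"sentence": "how big is the sun", "cluster": 0},
    {"sentence": "what is the size of the sun", "cluster": 0},
    {"sentence": "who wrote hamlet", "cluster": 1},
]

# Group sentences by their cluster ID to rebuild the paraphrase sets.
clusters = defaultdict(list)
for row in rows:
    clusters[row["cluster"]].append(row["sentence"])

print(clusters[0])  # the two paraphrases share cluster 0
```

The same loop works over the real split, e.g. `for row in dataset["train"]: ...`.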

Source Data

embedding-data/WikiAnswers on Hugging Face (Link)

Note: This dataset is intended for the owner's personal use; no rights are claimed over the source data.