---
dataset_info:
  features:
    - name: set
      sequence: string
  splits:
    - name: train
      num_bytes: 1580895690.1875737
      num_examples: 1095326
  download_size: 665292967
  dataset_size: 1580895690.1875737
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - feature-extraction
  - text-classification
  - sentence-similarity
language:
  - en
size_categories:
  - 1M<n<10M
---

# Dataset Card for "WikiAnswers Small"

## Dataset Summary

`nikhilchigali/wikianswers_small` is a subset of the [embedding-data/WikiAnswers](https://huggingface.co/datasets/embedding-data/WikiAnswers) dataset. It was created for the owner's personal use, and no rights over the data are claimed. Whereas the original dataset has 3,386,256 rows, this subset contains 1,095,326 rows, roughly 32% of the original.

## Languages

English.

## Dataset Structure

Each example in the dataset contains 25 equivalent sentences and is formatted as a dictionary with a single key, `"set"`, whose value is the list of sentences:

{"set": [sentence_1, sentence_2, ..., sentence_25]}
{"set": [sentence_1, sentence_2, ..., sentence_25]}
...
{"set": [sentence_1, sentence_2, ..., sentence_25]}

## Usage Example

Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:

```python
from datasets import load_dataset

dataset = load_dataset("nikhilchigali/wikianswers_small")
```

The dataset is loaded as a `DatasetDict` and, for N examples, has the format:

```
DatasetDict({
    train: Dataset({
        features: ['set'],
        num_rows: N
    })
})
```

Review an example `i` with:

```python
dataset["train"][i]["set"]
```
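Because every row groups paraphrases of the same question, a common way to use the dataset for sentence-similarity training is to expand each set into positive sentence pairs. A minimal sketch, where `set_to_pairs` and its `max_pairs` cap are illustrative helpers rather than part of the dataset or the 🤗 Datasets API:

```python
from itertools import combinations

def set_to_pairs(example, max_pairs=5):
    """Expand one "set" of equivalent sentences into positive pairs.

    Illustrative helper: caps the output because a 25-sentence set
    yields 300 pairs (25 choose 2).
    """
    return list(combinations(example["set"], 2))[:max_pairs]

# Assumes `dataset` was loaded as shown above.
for a, b in set_to_pairs(dataset["train"][0]):
    print(a, "<->", b)
```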

## Source Data

- [embedding-data/WikiAnswers](https://huggingface.co/datasets/embedding-data/WikiAnswers) on the Hugging Face Hub