---
language:
- en
size_categories:
- 1M<n<10M
task_categories:
- feature-extraction
- text-classification
- sentence-similarity
dataset_info:
  features:
  - name: sentence
    dtype: string
  - name: cluster
    dtype: int64
  splits:
  - name: train
    num_bytes: 59231273
    num_examples: 990526
  download_size: 22602562
  dataset_size: 59231273
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Dataset Card for "WikiAnswers Small"

## Dataset Summary
`nikhilchigali/wikianswers_small` is a subset of the `embedding-data/WikiAnswers` dataset ([Link](https://huggingface.co/datasets/embedding-data/WikiAnswers)).
Whereas the original dataset contains `3,386,256` rows (sets of paraphrases), this dataset keeps only 4% of those sets. Each set of sentences has been unraveled into individual rows, with a cluster ID identifying sentences that came from the same set.
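For reference, the unraveling from sets to rows can be reproduced roughly as follows. This is a minimal sketch, assuming the original dataset exposes its paraphrase sets in a `set` column and that a simple 4% random sample is acceptable; it is not the exact script used to build this dataset.

```python
from datasets import load_dataset, Dataset

# Load the original dataset, where each row is a set of paraphrased questions.
original = load_dataset("embedding-data/WikiAnswers", split="train")

# Keep roughly 4% of the sets (illustrative sampling, not the exact procedure).
sampled = original.shuffle(seed=42).select(range(int(0.04 * len(original))))

# Unravel each set into individual (sentence, cluster) rows.
rows = {"sentence": [], "cluster": []}
for cluster_id, example in enumerate(sampled):
    for sentence in example["set"]:
        rows["sentence"].append(sentence)
        rows["cluster"].append(cluster_id)

unraveled = Dataset.from_dict(rows)
```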

## Languages
English.

## Dataset Structure
Each example in the dataset contains a sentence and the ID of the cluster it belongs to. Sentences that share a cluster ID are paraphrases of each other.
```
{"sentence": [sentence], "cluster": [cluster_id]}
{"sentence": [sentence], "cluster": [cluster_id]}
{"sentence": [sentence], "cluster": [cluster_id]}
...
{"sentence": [sentence], "cluster": [cluster_id]}
```
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("nikhilchigali/wikianswers_small")
```
The dataset is loaded as a `DatasetDict` and has the following format for `N` examples:
```python
DatasetDict({
    train: Dataset({
        features: ['sentence', "cluster"],
        num_rows: N
    })
})
```
Review an example `i` with:
```python
dataset["train"][i]
```
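Since paraphrases share a cluster ID, you can, for example, collect all sentences equivalent to a given example by filtering on its cluster (an illustrative snippet, not part of the dataset itself):
```python
i = 0  # index of any example of interest
cluster_id = dataset["train"][i]["cluster"]

# Gather every sentence that belongs to the same paraphrase cluster.
paraphrases = dataset["train"].filter(lambda row: row["cluster"] == cluster_id)
print(paraphrases["sentence"])
```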
### Source Data
* `embedding-data/WikiAnswers` on HuggingFace ([Link](https://huggingface.co/datasets/embedding-data/WikiAnswers))

#### Note: This dataset is intended for the owner's personal use and claims no rights over the source data.