# Dataset Card for "WikiAnswers Small"

## Dataset Summary

`nikhilchigali/wikianswers_small` is a subset of the `embedding-data/WikiAnswers` dataset ([Link](https://huggingface.co/datasets/embedding-data/WikiAnswers)). Whereas the original dataset has `3,386,256` rows, this dataset contains only 4% of the total rows (sets). The sets of sentences have been unraveled into individual rows with corresponding cluster IDs that identify sentences from the same set.

## Languages

English.

## Dataset Structure
Each example in the dataset contains a sentence and the cluster ID of its set of equivalent sentences. Sentences in the same cluster are paraphrases of each other.
```
{"sentence": [sentence], "cluster": [cluster_id]}
{"sentence": [sentence], "cluster": [cluster_id]}
{"sentence": [sentence], "cluster": [cluster_id]}
...
{"sentence": [sentence], "cluster": [cluster_id]}
```
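Because the original sets are unraveled into individual rows, paraphrase sets can be recovered by grouping rows on their cluster ID. A minimal sketch, using toy rows in the same `{"sentence", "cluster"}` schema rather than the actual dataset:

```python
from collections import defaultdict

# Toy rows following the dataset's schema; real rows would come from
# load_dataset("nikhilchigali/wikianswers_small")["train"].
rows = [
    {"sentence": "how do plants make food", "cluster": 0},
    {"sentence": "how does a plant make its food", "cluster": 0},
    {"sentence": "what is the capital of france", "cluster": 1},
]

# Re-group the unraveled rows into paraphrase sets keyed by cluster ID.
clusters = defaultdict(list)
for row in rows:
    clusters[row["cluster"]].append(row["sentence"])

print(clusters[0])  # all sentences from the first paraphrase set
```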
### Usage Example

Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset

dataset = load_dataset("nikhilchigali/wikianswers_small")
```
The dataset is loaded as a `DatasetDict` and, for `N` examples, has the format:
```python
DatasetDict({
    train: Dataset({
        features: ['sentence', 'cluster'],
        num_rows: N
    })
})
```
Review an example `i` with:
```python
dataset["train"][i]
```
|
66 |
### Source Data
|
67 |
+
* `embedding-data/WikiAnswers` on HuggingFace ([Link](https://huggingface.co/datasets/embedding-data/WikiAnswers))
|
68 |
+
|
69 |
+
#### Note: This dataset is for the owner's personal use and claims no rights whatsoever.
|