configs:
  data_files:
  - split: train
    path: data/train-*
task_categories:
- sentence-similarity
language:
- en
size_categories:
- 100K<n<1M
---

# Dataset Card for "wikianswers_embeddings_512"

## Dataset Summary

`nikhilchigali/wikianswers_embeddings_512` is a subset of the `embedding-data/WikiAnswers` dataset ([Link](https://huggingface.co/datasets/embedding-data/WikiAnswers)). Unlike the original dataset, which has 3,386,256 rows (sets), this subset contains only 0.13% of them. The sets of equivalent sentences have been unraveled into individual rows, with a cluster ID identifying the set each sentence came from. Each sentence therefore has an associated cluster ID and a 512-dimensional embedding.

## Languages

English.

## Dataset Structure

Each example in the dataset contains a sentence and the cluster ID of its set of equivalent sentences; sentences in the same cluster are paraphrases of each other. The embeddings were created with the `distiluse-base-multilingual-cased-v1` model.

```
{"sentence": [sentence], "cluster": [cluster_id], "embedding_512": [embeddings]}
{"sentence": [sentence], "cluster": [cluster_id], "embedding_512": [embeddings]}
{"sentence": [sentence], "cluster": [cluster_id], "embedding_512": [embeddings]}
...
{"sentence": [sentence], "cluster": [cluster_id], "embedding_512": [embeddings]}
```

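The `distiluse-base-multilingual-cased-v1` model ships with the `sentence-transformers` library and outputs 512-dimensional vectors. A minimal sketch of how equivalent embeddings could be generated; the exact encoding settings used to build this dataset are an assumption:

```python
# Sketch: reproduce 512-dim sentence embeddings with sentence-transformers.
# The encoding parameters used for this dataset are assumptions, not
# confirmed by the dataset card.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("distiluse-base-multilingual-cased-v1")
sentences = [
    "How do you start a business?",
    "What steps does it take to open a business?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 512)
```
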
### Usage Example

Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:

```python
from datasets import load_dataset

dataset = load_dataset("nikhilchigali/wikianswers_embeddings_512")
```

The dataset loads as a `DatasetDict` and, for N examples, has the following format:

```python
DatasetDict({
    train: Dataset({
        features: ['sentence', 'cluster', 'embedding_512'],
        num_rows: N
    })
})
```

Review example `i` with:

```python
dataset["train"][i]
```

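Because rows that share a cluster ID are paraphrases, a quick sanity check is that their embeddings lie close together. A minimal sketch using NumPy; the field names come from the schema above, and the full-dataset `filter` is illustrative rather than a recommended access pattern:

```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("nikhilchigali/wikianswers_embeddings_512", split="train")

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Gather rows from the same cluster as the first example and compare embeddings.
first = dataset[0]
same_cluster = dataset.filter(lambda row: row["cluster"] == first["cluster"])
if len(same_cluster) > 1:
    sim = cosine_similarity(first["embedding_512"], same_cluster[1]["embedding_512"])
    print(f"Within-cluster similarity: {sim:.3f}")
```
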
## Source Data

`embedding-data/WikiAnswers` on Hugging Face ([Link](https://huggingface.co/datasets/embedding-data/WikiAnswers))

### Note: This dataset is for the owner's personal use and claims no rights whatsoever.