---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: text1
    dtype: string
  - name: text2
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': '-1'
          '1': '1'
  splits:
  - name: train
    num_bytes: 150266647.47592032
    num_examples: 50712
  - name: test
    num_bytes: 64403801.52407967
    num_examples: 21735
  download_size: 129675237
  dataset_size: 214670449.0
---
# Dataset Card for "WikiMedical_sentence_similarity"
WikiMedical_sentence_similarity is an adapted, ready-to-use sentence similarity dataset based on [gamino/wiki_medical_terms](https://huggingface.co/datasets/gamino/wiki_medical_terms).
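
The card defines `train` and `test` splits and a `ClassLabel` feature, so the dataset can be loaded directly with 🤗 `datasets`. A minimal sketch (the Hub namespace below is a placeholder, as the full repository id is not stated in this card):

```python
from datasets import load_dataset

# NOTE: "<namespace>" is a placeholder; substitute the dataset's actual Hub id.
ds = load_dataset("<namespace>/WikiMedical_sentence_similarity")

print(ds["train"].num_rows)  # 50712
print(ds["test"].num_rows)   # 21735

# `label` is a ClassLabel: class index 0 carries the name '-1', index 1 the name '1'.
label_feature = ds["train"].features["label"]
print(label_feature.int2str(0))  # '-1'
print(label_feature.int2str(1))  # '1'
```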
The preprocessing followed three steps (see the sketch after this list):
- Each text is split into segments of up to 256 tokens (NLTK tokenizer).
- Each segment is paired with a positive example when one exists, and with a negative example; negatives are drawn at random from the whole dataset.
- The train/test split is 70%/30%.
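
The exact pairing procedure is not spelled out beyond the steps above, so the following is a minimal sketch, assuming consecutive segments of the same article form positive pairs and negatives are sampled uniformly from the whole corpus; `chunk_text` and `build_pairs` are illustrative names, not part of the original pipeline:

```python
import random
import nltk

nltk.download("punkt", quiet=True)

def chunk_text(text: str, max_tokens: int = 256) -> list[str]:
    """Split a text into segments of at most `max_tokens` NLTK word tokens."""
    tokens = nltk.word_tokenize(text)
    return [
        " ".join(tokens[i : i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]

def build_pairs(texts: list[str], seed: int = 0):
    """Pair consecutive segments of a text as positives (label 1) and
    random segments from other texts as negatives (label -1),
    then split 70%/30% into train and test."""
    rng = random.Random(seed)
    all_chunks = [(i, c) for i, t in enumerate(texts) for c in chunk_text(t)]
    pairs = []
    for idx, (doc_id, chunk) in enumerate(all_chunks):
        # Positive pair: the next segment of the same document, if one exists.
        if idx + 1 < len(all_chunks) and all_chunks[idx + 1][0] == doc_id:
            pairs.append({"text1": chunk, "text2": all_chunks[idx + 1][1], "label": 1})
        # Negative pair: a segment drawn at random from the whole corpus.
        neg_doc, neg_chunk = rng.choice(all_chunks)
        if neg_doc != doc_id:
            pairs.append({"text1": chunk, "text2": neg_chunk, "label": -1})
    rng.shuffle(pairs)
    split = int(0.7 * len(pairs))
    return pairs[:split], pairs[split:]  # 70% train / 30% test
```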
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)