Datasets: hans
Tasks: Text Classification
Sub-tasks: natural-language-inference
Languages: English
Size: 10K<n<100K
add dataset_info in dataset metadata
README.md CHANGED
```diff
@@ -19,6 +19,42 @@ task_ids:
 - natural-language-inference
 paperswithcode_id: hans
 pretty_name: Heuristic Analysis for NLI Systems
+dataset_info:
+  features:
+  - name: premise
+    dtype: string
+  - name: hypothesis
+    dtype: string
+  - name: label
+    dtype:
+      class_label:
+        names:
+          0: entailment
+          1: non-entailment
+  - name: parse_premise
+    dtype: string
+  - name: parse_hypothesis
+    dtype: string
+  - name: binary_parse_premise
+    dtype: string
+  - name: binary_parse_hypothesis
+    dtype: string
+  - name: heuristic
+    dtype: string
+  - name: subcase
+    dtype: string
+  - name: template
+    dtype: string
+  config_name: plain_text
+  splits:
+  - name: train
+    num_bytes: 15916371
+    num_examples: 30000
+  - name: validation
+    num_bytes: 15893137
+    num_examples: 30000
+  download_size: 30947358
+  dataset_size: 31809508
 ---
 
 # Dataset Card for "hans"
@@ -185,4 +221,4 @@ The data fields are the same among all splits.
 
 ### Contributions
 
-Thanks to [@TevenLeScao](https://github.com/TevenLeScao), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
+Thanks to [@TevenLeScao](https://github.com/TevenLeScao), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
```
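As a rough illustration (not part of the commit), the schema declared in the `dataset_info` block above corresponds to the following `datasets.Features` definition; the feature names and label classes are taken directly from the metadata:

```python
# Sketch of the schema declared in the dataset_info block above,
# expressed with the `datasets` feature types (not part of the commit itself).
from datasets import ClassLabel, Features, Value

hans_features = Features(
    {
        "premise": Value("string"),
        "hypothesis": Value("string"),
        "label": ClassLabel(names=["entailment", "non-entailment"]),
        "parse_premise": Value("string"),
        "parse_hypothesis": Value("string"),
        "binary_parse_premise": Value("string"),
        "binary_parse_hypothesis": Value("string"),
        "heuristic": Value("string"),
        "subcase": Value("string"),
        "template": Value("string"),
    }
)
```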
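A minimal usage sketch, assuming the `datasets` library is installed and the Hub is reachable; the split sizes and label names it prints are the ones declared in the `splits` and `class_label` sections of the metadata:

```python
# Minimal check that the published metadata matches what load_dataset returns.
from datasets import load_dataset

hans = load_dataset("hans")                    # "plain_text" is the only config
print(hans["train"].num_rows)                  # 30000, per the declared train split
print(hans["validation"].num_rows)             # 30000, per the declared validation split
print(hans["train"].features["label"].names)   # ['entailment', 'non-entailment']
```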