asahi417 committed on
Commit
0c1f7b6
1 Parent(s): 6836726
.gitignore ADDED
@@ -0,0 +1 @@
cache
README.md ADDED
@@ -0,0 +1,171 @@
---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
pretty_name: SemEval2012 task 2 Relational Similarity
---
# Dataset Card for "relbert/semeval2012_relational_similarity_v3"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://aclanthology.org/S12-1047/](https://aclanthology.org/S12-1047/)
- **Dataset:** SemEval2012: Relational Similarity

### Dataset Summary

***IMPORTANT***: This is the same dataset as [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity), but with a different dataset construction.

Relational similarity dataset from [SemEval2012 task 2](https://aclanthology.org/S12-1047/), compiled to fine-tune the [RelBERT](https://github.com/asahi417/relbert) model.
The dataset contains lists of positive and negative word pairs for 89 pre-defined relations.
The relation types are built on top of the following 10 parent relation types.
```python
{
    1: "Class Inclusion",  # Hypernym
    2: "Part-Whole",  # Meronym, Substance Meronym
    3: "Similar",  # Synonym, Co-hyponym
    4: "Contrast",  # Antonym
    5: "Attribute",  # Attribute, Event
    6: "Non Attribute",
    7: "Case Relation",
    8: "Cause-Purpose",
    9: "Space-Time",
    10: "Representation"
}
```
Each parent relation is further divided into child relation types, whose definitions can be found [here](https://drive.google.com/file/d/0BzcZKTSeYL8VenY0QkVpZVpxYnc/view?resourcekey=0-ZP-UARfJj39PcLroibHPHw).
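Child relation IDs concatenate the parent index with a letter suffix (e.g. `8d` belongs to "Cause-Purpose"). A minimal, hypothetical helper (not part of the repository) for resolving a child ID to its parent name:

```python
# Parent relation names, restating the mapping above.
PARENT = {
    1: "Class Inclusion", 2: "Part-Whole", 3: "Similar", 4: "Contrast",
    5: "Attribute", 6: "Non Attribute", 7: "Case Relation",
    8: "Cause-Purpose", 9: "Space-Time", 10: "Representation",
}

def parent_of(relation_id: str) -> str:
    # Strip the trailing letter suffix (if any) to recover the parent index.
    digits = relation_id.rstrip("abcdefghij")
    return PARENT[int(digits)]

print(parent_of("8d"))   # Cause-Purpose
print(parent_of("10"))   # Representation
```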

## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
    'relation_type': '8d',
    'positives': [["breathe", "live"], ["study", "learn"], ["speak", "communicate"], ...],
    'negatives': [["starving", "hungry"], ["clean", "bathe"], ["hungry", "starving"], ...]
}
```
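Each line of the JSONL split files holds one such record; a minimal sketch of parsing a (shortened) record with the standard library, so the schema above can be checked without downloading anything:

```python
import json

# A shortened record in the same shape as the example above.
line = ('{"relation_type": "8d", '
        '"positives": [["breathe", "live"], ["study", "learn"]], '
        '"negatives": [["starving", "hungry"], ["clean", "bathe"]]}')
record = json.loads(line)

# positives/negatives are lists of [head, tail] word pairs.
print(record["relation_type"])                            # 8d
print(len(record["positives"]), len(record["negatives"]))  # 2 2
```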

### Data Splits
| name | train | validation |
|----------------------------------|------:|-----------:|
| semeval2012_relational_similarity | 89 | 89 |

### Number of Positive/Negative Word-pairs in each Split

| relation_type | positive (train) | negative (train) | positive (validation) | negative (validation) |
|:--------------|-----------------:|-----------------:|----------------------:|----------------------:|
| 1 | 50 | 740 | 63 | 826 |
| 10 | 60 | 730 | 66 | 823 |
| 10a | 10 | 799 | 14 | 894 |
| 10b | 10 | 797 | 13 | 893 |
| 10c | 10 | 800 | 11 | 898 |
| 10d | 10 | 799 | 10 | 898 |
| 10e | 10 | 795 | 8 | 896 |
| 10f | 10 | 799 | 10 | 898 |
| 1a | 10 | 797 | 14 | 892 |
| 1b | 10 | 797 | 14 | 892 |
| 1c | 10 | 800 | 11 | 898 |
| 1d | 10 | 797 | 16 | 890 |
| 1e | 10 | 794 | 8 | 895 |
| 2 | 100 | 690 | 117 | 772 |
| 2a | 10 | 799 | 15 | 893 |
| 2b | 10 | 796 | 11 | 894 |
| 2c | 10 | 798 | 13 | 894 |
| 2d | 10 | 798 | 10 | 897 |
| 2e | 10 | 799 | 11 | 897 |
| 2f | 10 | 802 | 11 | 900 |
| 2g | 10 | 796 | 16 | 889 |
| 2h | 10 | 799 | 11 | 897 |
| 2i | 10 | 800 | 9 | 900 |
| 2j | 10 | 801 | 10 | 900 |
| 3 | 80 | 710 | 80 | 809 |
| 3a | 10 | 799 | 11 | 897 |
| 3b | 10 | 802 | 11 | 900 |
| 3c | 10 | 798 | 12 | 895 |
| 3d | 10 | 798 | 14 | 893 |
| 3e | 10 | 802 | 5 | 906 |
| 3f | 10 | 803 | 11 | 901 |
| 3g | 10 | 801 | 6 | 904 |
| 3h | 10 | 801 | 10 | 900 |
| 4 | 80 | 710 | 82 | 807 |
| 4a | 10 | 802 | 11 | 900 |
| 4b | 10 | 797 | 7 | 899 |
| 4c | 10 | 800 | 12 | 897 |
| 4d | 10 | 796 | 4 | 901 |
| 4e | 10 | 802 | 12 | 899 |
| 4f | 10 | 802 | 9 | 902 |
| 4g | 10 | 798 | 15 | 892 |
| 4h | 10 | 801 | 12 | 898 |
| 5 | 90 | 700 | 105 | 784 |
| 5a | 10 | 798 | 14 | 893 |
| 5b | 10 | 801 | 8 | 902 |
| 5c | 10 | 799 | 11 | 897 |
| 5d | 10 | 797 | 15 | 891 |
| 5e | 10 | 801 | 8 | 902 |
| 5f | 10 | 801 | 11 | 899 |
| 5g | 10 | 802 | 9 | 902 |
| 5h | 10 | 800 | 15 | 894 |
| 5i | 10 | 800 | 14 | 895 |
| 6 | 80 | 710 | 99 | 790 |
| 6a | 10 | 798 | 15 | 892 |
| 6b | 10 | 801 | 11 | 899 |
| 6c | 10 | 801 | 13 | 897 |
| 6d | 10 | 804 | 10 | 903 |
| 6e | 10 | 801 | 11 | 899 |
| 6f | 10 | 799 | 12 | 896 |
| 6g | 10 | 798 | 12 | 895 |
| 6h | 10 | 799 | 15 | 893 |
| 7 | 80 | 710 | 91 | 798 |
| 7a | 10 | 800 | 14 | 895 |
| 7b | 10 | 796 | 7 | 898 |
| 7c | 10 | 797 | 11 | 895 |
| 7d | 10 | 800 | 14 | 895 |
| 7e | 10 | 797 | 10 | 896 |
| 7f | 10 | 796 | 12 | 893 |
| 7g | 10 | 794 | 9 | 894 |
| 7h | 10 | 795 | 14 | 890 |
| 8 | 80 | 710 | 90 | 799 |
| 8a | 10 | 797 | 14 | 892 |
| 8b | 10 | 801 | 7 | 903 |
| 8c | 10 | 796 | 12 | 893 |
| 8d | 10 | 796 | 13 | 892 |
| 8e | 10 | 796 | 11 | 894 |
| 8f | 10 | 797 | 12 | 894 |
| 8g | 10 | 793 | 7 | 895 |
| 8h | 10 | 798 | 14 | 893 |
| 9 | 90 | 700 | 96 | 793 |
| 9a | 10 | 795 | 14 | 890 |
| 9b | 10 | 799 | 12 | 896 |
| 9c | 10 | 790 | 7 | 892 |
| 9d | 10 | 803 | 9 | 903 |
| 9e | 10 | 804 | 8 | 905 |
| 9f | 10 | 799 | 10 | 898 |
| 9g | 10 | 796 | 14 | 891 |
| 9h | 10 | 799 | 13 | 895 |
| 9i | 10 | 799 | 9 | 899 |
| SUM | 1580 | 70207 | 1778 | 78820 |

### Citation Information
```
@inproceedings{jurgens-etal-2012-semeval,
    title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity",
    author = "Jurgens, David and
      Mohammad, Saif and
      Turney, Peter and
      Holyoak, Keith",
    booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)",
    month = "7-8 " # jun,
    year = "2012",
    address = "Montr{\'e}al, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/S12-1047",
    pages = "356--364",
}
```
dataset/train.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
dataset/valid.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
get_stats.py ADDED
@@ -0,0 +1,36 @@
```python
import pandas as pd
from datasets import load_dataset

# Count positive/negative pairs per relation type in each split.
data = load_dataset('relbert/semeval2012_relational_similarity_v3')
stats = []
for k in data.keys():
    for i in data[k]:
        stats.append({'relation_type': i['relation_type'], 'split': k,
                      'positives': len(i['positives']), 'negatives': len(i['negatives'])})
df = pd.DataFrame(stats)
df_train = df[df['split'] == 'train']
df_valid = df[df['split'] == 'validation']
stats = []
for r in df['relation_type'].unique():
    _df_t = df_train[df_train['relation_type'] == r]
    _df_v = df_valid[df_valid['relation_type'] == r]
    stats.append({
        'relation_type': r,
        'positive (train)': 0 if len(_df_t) == 0 else _df_t['positives'].values[0],
        'negative (train)': 0 if len(_df_t) == 0 else _df_t['negatives'].values[0],
        'positive (validation)': 0 if len(_df_v) == 0 else _df_v['positives'].values[0],
        'negative (validation)': 0 if len(_df_v) == 0 else _df_v['negatives'].values[0],
    })

df = pd.DataFrame(stats).sort_values(by=['relation_type'])
df.index = df.pop('relation_type')
# Append a SUM row via a double transpose.
sum_pairs = df.sum(0)
df = df.T
df['SUM'] = sum_pairs
df = df.T

df.to_csv('stats.csv')
with open('stats.md', 'w') as f:
    f.write(df.to_markdown())
```
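The double-transpose idiom used above to append a `SUM` row can be seen on a toy frame (the data below is made up; only the column names mirror the script):

```python
import pandas as pd

# Toy per-relation counts, indexed by relation_type.
df = pd.DataFrame(
    {"positive (train)": [50, 60], "negative (train)": [740, 730]},
    index=["1", "10"],
)
totals = df.sum(0)   # column-wise totals
df = df.T            # rows become columns so a "SUM" column can be added
df["SUM"] = totals
df = df.T            # transpose back: SUM is now the last row
print(df.loc["SUM", "positive (train)"])  # 110
```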
process.py ADDED
@@ -0,0 +1,154 @@
```python
import json
import os
import tarfile
import zipfile
import gzip
import requests

from glob import glob
import gdown


def wget(url, cache_dir: str = './cache', gdrive_filename: str = None):
    """ wget and uncompress data_iterator """
    os.makedirs(cache_dir, exist_ok=True)
    if url.startswith('https://drive.google.com'):
        assert gdrive_filename is not None, 'please provide filename for gdrive download'
        gdown.download(url, f'{cache_dir}/{gdrive_filename}', quiet=False)
        filename = gdrive_filename
    else:
        filename = os.path.basename(url)
        with open(f'{cache_dir}/{filename}', "wb") as f:
            r = requests.get(url)
            f.write(r.content)
    path = f'{cache_dir}/{filename}'
    if path.endswith('.tar.gz') or path.endswith('.tgz') or path.endswith('.tar'):
        if path.endswith('.tar'):
            tar = tarfile.open(path)
        else:
            tar = tarfile.open(path, "r:gz")
        tar.extractall(cache_dir)
        tar.close()
        os.remove(path)
    elif path.endswith('.zip'):
        with zipfile.ZipFile(path, 'r') as zip_ref:
            zip_ref.extractall(cache_dir)
        os.remove(path)
    elif path.endswith('.gz'):
        with gzip.open(path, 'rb') as f:
            with open(path.replace('.gz', ''), 'wb') as f_write:
                f_write.write(f.read())
        os.remove(path)


def get_training_data(return_validation_set: bool = False):
    """ Get RelBERT training data

    Returns
    -------
    pairs: dictionary of list (positive pairs, negative pairs)
    {'1b': [[0.6, ('office', 'desk'), ..], [[-0.1, ('aaa', 'bbb'), ...]]
    """
    top_n = 10
    cache_dir = 'cache'
    os.makedirs(cache_dir, exist_ok=True)
    remove_relation = None
    path_answer = f'{cache_dir}/Phase2Answers'
    path_scale = f'{cache_dir}/Phase2AnswersScaled'
    url = 'https://drive.google.com/u/0/uc?id=0BzcZKTSeYL8VYWtHVmxUR3FyUmc&export=download'
    filename = 'SemEval-2012-Platinum-Ratings.tar.gz'
    if not (os.path.exists(path_scale) and os.path.exists(path_answer)):
        wget(url, gdrive_filename=filename, cache_dir=cache_dir)
    files_answer = [os.path.basename(i) for i in glob(f'{path_answer}/*.txt')]
    files_scale = [os.path.basename(i) for i in glob(f'{path_scale}/*.txt')]
    assert files_answer == files_scale, f'files are not matched: {files_scale} vs {files_answer}'
    positives = {}
    negatives = {}
    all_relation_type = {}
    positives_score = {}
    # score_range = [90.0, 88.7]  # the absolute value of max/min prototypicality rating
    for i in files_scale:
        relation_id = i.split('-')[-1].replace('.txt', '')
        if remove_relation and int(relation_id[:-1]) in remove_relation:
            continue
        with open(f'{path_answer}/{i}', 'r') as f:
            lines_answer = [_l.replace('"', '').split('\t') for _l in f.read().split('\n')
                            if not _l.startswith('#') and len(_l)]
        relation_type = list(set(list(zip(*lines_answer))[-1]))
        assert len(relation_type) == 1, relation_type
        relation_type = relation_type[0]
        with open(f'{path_scale}/{i}', 'r') as f:
            # list of tuple [score, ("a", "b")]
            scales = [[float(_l[:5]), _l[6:].replace('"', '')] for _l in f.read().split('\n')
                      if not _l.startswith('#') and len(_l)]
        scales = sorted(scales, key=lambda _x: _x[0])
        # positive pairs are in the reverse order of prototypicality score
        positive_pairs = [[s, tuple(p.split(':'))] for s, p in filter(lambda _x: _x[0] > 0, scales)]
        positive_pairs = sorted(positive_pairs, key=lambda x: x[0], reverse=True)
        if return_validation_set:
            positive_pairs = positive_pairs[min(top_n, len(positive_pairs)):]
            if len(positive_pairs) == 0:
                continue
        else:
            positive_pairs = positive_pairs[:min(top_n, len(positive_pairs))]
        positives_score[relation_id] = positive_pairs
        positives[relation_id] = list(list(zip(*positive_pairs))[1])
        negatives[relation_id] = [tuple(p.split(':')) for s, p in filter(lambda _x: _x[0] < 0, scales)]
        all_relation_type[relation_id] = relation_type
    parent = list(set([i[:-1] for i in all_relation_type.keys()]))

    # 1st level relation contrast (among parent relations)
    relation_pairs_1st = []
    for p in parent:
        child_positive = list(filter(lambda x: x.startswith(p), list(all_relation_type.keys())))
        child_negative = list(filter(lambda x: not x.startswith(p), list(all_relation_type.keys())))
        positive_pairs = []
        negative_pairs = []
        for c in child_positive:
            positive_pairs += positives[c]
            # negative_pairs += negatives[c]
        for c in child_negative:
            negative_pairs += positives[c]
            # negative_pairs += negatives[c]
        relation_pairs_1st += [{
            "positives": positive_pairs, "negatives": negative_pairs, "relation_type": p, "level": "parent"
        }]

    # 2nd level relation contrast (among child relations) & 3rd level relation contrast (within child relations)
    relation_pairs_2nd = []
    relation_pairs_3rd = []
    for p in all_relation_type.keys():
        positive_pairs = positives[p]
        negative_pairs = negatives[p]
        for n in all_relation_type.keys():
            if p == n:
                continue
            negative_pairs += positives[n]
        relation_pairs_2nd += [{
            "positives": positive_pairs, "negatives": negative_pairs, "relation_type": p, "level": "child"
        }]

        for n, anchor in enumerate(positive_pairs):
            for _n, posi in enumerate(positive_pairs):
                if n < _n:
                    negative_pairs = positive_pairs[_n+1:]
                    if len(negative_pairs) > 0:
                        relation_pairs_3rd += [{
                            "positives": [(anchor, posi)],
                            "negatives": [(anchor, neg) for neg in negative_pairs],
                            "relation_type": p,
                            "level": "child_prototypical"
                        }]

    return relation_pairs_1st + relation_pairs_2nd + relation_pairs_3rd


if __name__ == '__main__':
    data_train = get_training_data(return_validation_set=False)
    with open('dataset/train.jsonl', 'w') as f_writer:
        f_writer.write('\n'.join([json.dumps(i) for i in data_train]))
    data_valid = get_training_data(return_validation_set=True)
    with open('dataset/valid.jsonl', 'w') as f_writer:
        f_writer.write('\n'.join([json.dumps(i) for i in data_valid]))
```
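The third-level ("child_prototypical") contrast in the script turns each ranked positive list into anchor/positive/negative triples: for each anchor, a higher-ranked pair is the positive and all lower-ranked pairs are negatives. A minimal sketch of that inner loop on a toy list (the pairs here are placeholders, not real data):

```python
# Toy positive pairs, assumed sorted by descending prototypicality.
positive_pairs = [("a", "b"), ("c", "d"), ("e", "f"), ("g", "h")]

triples = []
for n, anchor in enumerate(positive_pairs):
    for _n, posi in enumerate(positive_pairs):
        if n < _n:
            # Everything ranked below the positive becomes a negative.
            negatives = positive_pairs[_n + 1:]
            if negatives:
                triples.append({
                    "positives": [(anchor, posi)],
                    "negatives": [(anchor, neg) for neg in negatives],
                })
print(len(triples))  # 3
```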
semeval2012_relational_similarity.py ADDED
@@ -0,0 +1,82 @@
```python
import json
import datasets

logger = datasets.logging.get_logger(__name__)
_DESCRIPTION = """[SemEVAL 2012 task 2: Relational Similarity](https://aclanthology.org/S12-1047/)"""
_NAME = "semeval2012_relational_similarity_v3"
_VERSION = "1.0.0"
_CITATION = """
@inproceedings{jurgens-etal-2012-semeval,
    title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity",
    author = "Jurgens, David and
      Mohammad, Saif and
      Turney, Peter and
      Holyoak, Keith",
    booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)",
    month = "7-8 " # jun,
    year = "2012",
    address = "Montr{\'e}al, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/S12-1047",
    pages = "356--364",
}
"""

_HOME_PAGE = "https://github.com/asahi417/relbert"
_URL = f'https://huggingface.co/datasets/relbert/{_NAME}/raw/main/dataset'
_URLS = {
    str(datasets.Split.TRAIN): [f'{_URL}/train.jsonl'],
    str(datasets.Split.VALIDATION): [f'{_URL}/valid.jsonl'],
}


class SemEVAL2012RelationalSimilarityV3Config(datasets.BuilderConfig):
    """BuilderConfig"""

    def __init__(self, **kwargs):
        """BuilderConfig.
        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(SemEVAL2012RelationalSimilarityV3Config, self).__init__(**kwargs)


class SemEVAL2012RelationalSimilarityV3(datasets.GeneratorBasedBuilder):
    """Dataset."""

    BUILDER_CONFIGS = [
        SemEVAL2012RelationalSimilarityV3Config(
            name=_NAME, version=datasets.Version(_VERSION), description=_DESCRIPTION
        ),
    ]

    def _split_generators(self, dl_manager):
        downloaded_file = dl_manager.download_and_extract(_URLS)
        return [datasets.SplitGenerator(name=i, gen_kwargs={"filepaths": downloaded_file[str(i)]})
                for i in [datasets.Split.TRAIN, datasets.Split.VALIDATION]]

    def _generate_examples(self, filepaths):
        _key = 0
        for filepath in filepaths:
            logger.info(f"generating examples from = {filepath}")
            with open(filepath, encoding="utf-8") as f:
                _list = [i for i in f.read().split('\n') if len(i) > 0]
                for i in _list:
                    data = json.loads(i)
                    yield _key, data
                    _key += 1

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "relation_type": datasets.Value("string"),
                    "positives": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
                    "negatives": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
                }
            ),
            supervised_keys=None,
            homepage=_HOME_PAGE,
            citation=_CITATION,
        )
```
stats.csv ADDED
@@ -0,0 +1,91 @@
```
relation_type,positive (train),negative (train),positive (validation),negative (validation)
1,50,740,63,826
10,60,730,66,823
10a,10,799,14,894
10b,10,797,13,893
10c,10,800,11,898
10d,10,799,10,898
10e,10,795,8,896
10f,10,799,10,898
1a,10,797,14,892
1b,10,797,14,892
1c,10,800,11,898
1d,10,797,16,890
1e,10,794,8,895
2,100,690,117,772
2a,10,799,15,893
2b,10,796,11,894
2c,10,798,13,894
2d,10,798,10,897
2e,10,799,11,897
2f,10,802,11,900
2g,10,796,16,889
2h,10,799,11,897
2i,10,800,9,900
2j,10,801,10,900
3,80,710,80,809
3a,10,799,11,897
3b,10,802,11,900
3c,10,798,12,895
3d,10,798,14,893
3e,10,802,5,906
3f,10,803,11,901
3g,10,801,6,904
3h,10,801,10,900
4,80,710,82,807
4a,10,802,11,900
4b,10,797,7,899
4c,10,800,12,897
4d,10,796,4,901
4e,10,802,12,899
4f,10,802,9,902
4g,10,798,15,892
4h,10,801,12,898
5,90,700,105,784
5a,10,798,14,893
5b,10,801,8,902
5c,10,799,11,897
5d,10,797,15,891
5e,10,801,8,902
5f,10,801,11,899
5g,10,802,9,902
5h,10,800,15,894
5i,10,800,14,895
6,80,710,99,790
6a,10,798,15,892
6b,10,801,11,899
6c,10,801,13,897
6d,10,804,10,903
6e,10,801,11,899
6f,10,799,12,896
6g,10,798,12,895
6h,10,799,15,893
7,80,710,91,798
7a,10,800,14,895
7b,10,796,7,898
7c,10,797,11,895
7d,10,800,14,895
7e,10,797,10,896
7f,10,796,12,893
7g,10,794,9,894
7h,10,795,14,890
8,80,710,90,799
8a,10,797,14,892
8b,10,801,7,903
8c,10,796,12,893
8d,10,796,13,892
8e,10,796,11,894
8f,10,797,12,894
8g,10,793,7,895
8h,10,798,14,893
9,90,700,96,793
9a,10,795,14,890
9b,10,799,12,896
9c,10,790,7,892
9d,10,803,9,903
9e,10,804,8,905
9f,10,799,10,898
9g,10,796,14,891
9h,10,799,13,895
9i,10,799,9,899
SUM,1580,70207,1778,78820
```
stats.md ADDED
@@ -0,0 +1,93 @@
+ | relation_type | positive (train) | negative (train) | positive (validation) | negative (validation) |
2
+ |:----------------|-------------------:|-------------------:|------------------------:|------------------------:|
3
+ | 1 | 50 | 740 | 63 | 826 |
4
+ | 10 | 60 | 730 | 66 | 823 |
5
+ | 10a | 10 | 799 | 14 | 894 |
6
+ | 10b | 10 | 797 | 13 | 893 |
7
+ | 10c | 10 | 800 | 11 | 898 |
8
+ | 10d | 10 | 799 | 10 | 898 |
9
+ | 10e | 10 | 795 | 8 | 896 |
10
+ | 10f | 10 | 799 | 10 | 898 |
11
+ | 1a | 10 | 797 | 14 | 892 |
12
+ | 1b | 10 | 797 | 14 | 892 |
13
+ | 1c | 10 | 800 | 11 | 898 |
14
+ | 1d | 10 | 797 | 16 | 890 |
15
+ | 1e | 10 | 794 | 8 | 895 |
16
+ | 2 | 100 | 690 | 117 | 772 |
17
+ | 2a | 10 | 799 | 15 | 893 |
18
+ | 2b | 10 | 796 | 11 | 894 |
19
+ | 2c | 10 | 798 | 13 | 894 |
20
+ | 2d | 10 | 798 | 10 | 897 |
21
+ | 2e | 10 | 799 | 11 | 897 |
22
+ | 2f | 10 | 802 | 11 | 900 |
23
+ | 2g | 10 | 796 | 16 | 889 |
24
+ | 2h | 10 | 799 | 11 | 897 |
25
+ | 2i | 10 | 800 | 9 | 900 |
26
+ | 2j | 10 | 801 | 10 | 900 |
27
+ | 3 | 80 | 710 | 80 | 809 |
28
+ | 3a | 10 | 799 | 11 | 897 |
29
+ | 3b | 10 | 802 | 11 | 900 |
30
+ | 3c | 10 | 798 | 12 | 895 |
31
+ | 3d | 10 | 798 | 14 | 893 |
32
+ | 3e | 10 | 802 | 5 | 906 |
33
+ | 3f | 10 | 803 | 11 | 901 |
34
+ | 3g | 10 | 801 | 6 | 904 |
35
+ | 3h | 10 | 801 | 10 | 900 |
36
+ | 4 | 80 | 710 | 82 | 807 |
37
+ | 4a | 10 | 802 | 11 | 900 |
38
+ | 4b | 10 | 797 | 7 | 899 |
39
+ | 4c | 10 | 800 | 12 | 897 |
40
+ | 4d | 10 | 796 | 4 | 901 |
41
+ | 4e | 10 | 802 | 12 | 899 |
42
+ | 4f | 10 | 802 | 9 | 902 |
43
+ | 4g | 10 | 798 | 15 | 892 |
44
+ | 4h | 10 | 801 | 12 | 898 |
45
+ | 5 | 90 | 700 | 105 | 784 |
46
+ | 5a | 10 | 798 | 14 | 893 |
47
+ | 5b | 10 | 801 | 8 | 902 |
48
+ | 5c | 10 | 799 | 11 | 897 |
49
+ | 5d | 10 | 797 | 15 | 891 |
50
+ | 5e | 10 | 801 | 8 | 902 |
51
+ | 5f | 10 | 801 | 11 | 899 |
52
+ | 5g | 10 | 802 | 9 | 902 |
53
+ | 5h | 10 | 800 | 15 | 894 |
54
+ | 5i | 10 | 800 | 14 | 895 |
55
+ | 6 | 80 | 710 | 99 | 790 |
56
+ | 6a | 10 | 798 | 15 | 892 |
57
+ | 6b | 10 | 801 | 11 | 899 |
58
+ | 6c | 10 | 801 | 13 | 897 |
59
+ | 6d | 10 | 804 | 10 | 903 |
60
+ | 6e | 10 | 801 | 11 | 899 |
61
+ | 6f | 10 | 799 | 12 | 896 |
62
+ | 6g | 10 | 798 | 12 | 895 |
63
+ | 6h | 10 | 799 | 15 | 893 |
64
+ | 7 | 80 | 710 | 91 | 798 |
65
+ | 7a | 10 | 800 | 14 | 895 |
66
+ | 7b | 10 | 796 | 7 | 898 |
67
+ | 7c | 10 | 797 | 11 | 895 |
68
+ | 7d | 10 | 800 | 14 | 895 |
69
+ | 7e | 10 | 797 | 10 | 896 |
70
+ | 7f | 10 | 796 | 12 | 893 |
71
+ | 7g | 10 | 794 | 9 | 894 |
72
+ | 7h | 10 | 795 | 14 | 890 |
73
+ | 8 | 80 | 710 | 90 | 799 |
74
+ | 8a | 10 | 797 | 14 | 892 |
75
+ | 8b | 10 | 801 | 7 | 903 |
76
+ | 8c | 10 | 796 | 12 | 893 |
77
+ | 8d | 10 | 796 | 13 | 892 |
78
+ | 8e | 10 | 796 | 11 | 894 |
79
+ | 8f | 10 | 797 | 12 | 894 |
80
+ | 8g | 10 | 793 | 7 | 895 |
81
+ | 8h | 10 | 798 | 14 | 893 |
82
+ | 9 | 90 | 700 | 96 | 793 |
83
+ | 9a | 10 | 795 | 14 | 890 |
84
+ | 9b | 10 | 799 | 12 | 896 |
85
+ | 9c | 10 | 790 | 7 | 892 |
86
+ | 9d | 10 | 803 | 9 | 903 |
87
+ | 9e | 10 | 804 | 8 | 905 |
88
+ | 9f | 10 | 799 | 10 | 898 |
89
+ | 9g | 10 | 796 | 14 | 891 |
90
+ | 9h | 10 | 799 | 13 | 895 |
91
+ | 9i | 10 | 799 | 9 | 899 |
92
+ |:----------------|-------------------:|-------------------:|------------------------:|------------------------:|
93
+ | SUM | 1580 | 70207 | 1778 | 78820 |