Commit 28fdde1 (0 parents)

system (HF staff) committed:

Update files from the datasets library (from 1.6.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.6.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,216 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - cc-by-sa-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - multi-label-classification
+ - text-classification-legal-topic-classification
+ ---
+
+ # Dataset Card for the EUR-Lex dataset
+
+ ## Table of Contents
+ - [Dataset Card for the EUR-Lex dataset](#dataset-card-for-the-eur-lex-dataset)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+     - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/
+ - **Repository:** http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/
+ - **Paper:** https://www.aclweb.org/anthology/P19-1636/
+ - **Leaderboard:** N/A
+ - **Point of Contact:** [Ilias Chalkidis](mailto:ihalk@aueb.gr)
+
+ ### Dataset Summary
+
+ EURLEX57K can be viewed as an improved version of the dataset released by Mencía and Fürnkranz (2007), which has been widely used in Large-scale Multi-label Text Classification (LMTC) research but is less than half the size of EURLEX57K (19.6k documents, 4k EUROVOC labels) and more than ten years old.
+ EURLEX57K contains 57k legislative documents in English from EUR-Lex (https://eur-lex.europa.eu) with an average length of 727 words. Each document contains four major zones:
+
+ - the header, which includes the title and name of the legal body enforcing the legal act;
+ - the recitals, which are legal background references;
+ - the main body, usually organized in articles; and
+ - the attachments (appendices, annexes).
+
+ **Labeling / Annotation**
+
+ All the documents of the dataset have been annotated by the Publications Office of the EU (https://publications.europa.eu/en) with multiple concepts from EUROVOC (http://eurovoc.europa.eu/).
+ While EUROVOC includes approx. 7k concepts (labels), only 4,271 (59.31%) are present in EURLEX57K, of which only 2,049 (47.97%) have been assigned to more than 10 documents. The 4,271 labels are also divided into frequent (746 labels), few-shot (3,362) and zero-shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.
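
As a rough illustration of this grouping, the three label sets can be recomputed from the released splits (a sketch, assuming the `eurlex` loading script and `eurlex57k` config added in this commit; the paper's wording leaves the exact treatment of the 50-document boundary open):

```python
from collections import Counter

from datasets import load_dataset

# Recompute the frequent / few-shot / zero-shot label groups.
dataset = load_dataset("eurlex", "eurlex57k")

# Number of *training* documents each EUROVOC concept is assigned to.
train_counts = Counter(
    concept for example in dataset["train"] for concept in example["eurovoc_concepts"]
)

# Every label that appears anywhere in the dataset.
all_labels = {
    concept
    for split in ("train", "validation", "test")
    for example in dataset[split]
    for concept in example["eurovoc_concepts"]
}

frequent = {c for c in all_labels if train_counts[c] > 50}  # more than 50 training docs
few_shot = {c for c in all_labels if 0 < train_counts[c] <= 50}  # at least one training doc
zero_shot = {c for c in all_labels if train_counts[c] == 0}  # unseen in training
```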
+
+ ### Supported Tasks and Leaderboards
+
+ The dataset supports:
+
+ **Multi-label Text Classification:** Given the text of a document, a model predicts the relevant EUROVOC concepts.
+ **Few-shot and Zero-shot learning:** As already noted, the labels can be divided into three groups: frequent (746 labels), few-shot (3,362) and zero-shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.
+
+ ### Languages
+
+ All documents are written in English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```json
+ {
+     "celex_id": "31979D0509",
+     "title": "79/509/EEC: Council Decision of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain",
+     "text": "COUNCIL DECISION of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain (79/509/EEC)\nTHE COUNCIL OF THE EUROPEAN COMMUNITIES\nHaving regard to the Treaty establishing the European Economic Community, and in particular Article 43 thereof,\nHaving regard to the proposal from the Commission (1),\nHaving regard to the opinion of the European Parliament (2),\nWhereas the Community should take all appropriate measures to protect itself against the appearance of African swine fever on its territory;\nWhereas to this end the Community has undertaken, and continues to undertake, action designed to contain outbreaks of this type of disease far from its frontiers by helping countries affected to reinforce their preventive measures ; whereas for this purpose Community subsidies have already been granted to Spain;\nWhereas these measures have unquestionably made an effective contribution to the protection of Community livestock, especially through the creation and maintenance of a buffer zone north of the river Ebro;\nWhereas, however, in the opinion of the Spanish authorities themselves, the measures so far implemented must be reinforced if the fundamental objective of eradicating the disease from the entire country is to be achieved;\nWhereas the Spanish authorities have asked the Community to contribute to the expenses necessary for the efficient implementation of a total eradication programme;\nWhereas a favourable response should be given to this request by granting aid to Spain, having regard to the undertaking given by that country to protect the Community against African swine fever and to eliminate completely this disease by the end of a five-year eradication plan;\nWhereas this eradication plan must include certain measures which guarantee the effectiveness of the action taken, and it must be possible to adapt these measures to developments in the situation by means of a procedure establishing close cooperation between the Member States and the Commission;\nWhereas it is necessary to keep the Member States regularly informed as to the progress of the action undertaken,",
+     "eurovoc_concepts": ["192", "2356", "2560", "862", "863"]
+ }
+ ```
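
An instance like the one above can be pulled up directly with the `datasets` library (a minimal sketch, assuming the `eurlex` script and `eurlex57k` config added in this commit):

```python
from datasets import load_dataset

# Load EURLEX57K and inspect the first training instance.
dataset = load_dataset("eurlex", "eurlex57k")
example = dataset["train"][0]
print(example["celex_id"], example["title"])
print(example["eurovoc_concepts"])
```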
+
+ ### Data Fields
+
+ The following data fields are provided for documents (`train`, `dev`, `test`):
+
+ `celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both EUR-Lex and CELLAR.\
+ `title`: (**str**) The title of the document.\
+ `text`: (**str**) The full content of each document, which is represented by its `header`, `recitals` and `main_body`.\
+ `eurovoc_concepts`: (**List[str]**) The relevant EUROVOC concepts (labels).
+
+ If you want to use the descriptors of the EUROVOC concepts, similar to Chalkidis et al. (2020), please load: https://archive.org/download/EURLEX57K/eurovoc_concepts.jsonl
+
+ ```python
+ import json
+
+ # JSON objects decode to (unhashable) dicts, so collect them in a list, not a set.
+ with open('./eurovoc_concepts.jsonl') as jsonl_file:
+     eurovoc_concepts = [json.loads(line) for line in jsonl_file]
+ ```
+
+ ### Data Splits
+
+ | Split       | No. of Documents | Avg. words | Avg. labels |
+ | ----------- | ---------------- | ---------- | ----------- |
+ | Train       | 45,000           | 729        | 5           |
+ | Development | 6,000            | 714        | 5           |
+ | Test        | 6,000            | 725        | 5           |
+
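The averages in the table can be reproduced approximately as follows (a sketch; naive whitespace splitting may differ slightly from the tokenization used in the paper):

```python
from datasets import load_dataset

# Recompute the per-split statistics from the table above.
dataset = load_dataset("eurlex", "eurlex57k")
for split in ("train", "validation", "test"):
    docs = dataset[split]
    avg_words = sum(len(ex["text"].split()) for ex in docs) / len(docs)
    avg_labels = sum(len(ex["eurovoc_concepts"]) for ex in docs) / len(docs)
    print(f"{split}: {len(docs)} docs, ~{avg_words:.0f} words, ~{avg_labels:.1f} labels")
```
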
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The dataset was curated by Chalkidis et al. (2019).\
+ The documents have been annotated by the Publications Office of the EU (https://publications.europa.eu/en).
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The original data are available at the EUR-Lex portal (https://eur-lex.europa.eu) in an unprocessed format.
+ The documents were downloaded from the EUR-Lex portal in HTML format.
+ The relevant metadata and EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of the EU (http://publications.europa.eu/webapi/rdf/sparql).
+
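For readers who want to explore that endpoint, a generic query can be issued with `requests` (purely illustrative: the placeholder query below is not the one used to build the dataset, and the endpoint is assumed to accept the standard `query`/`format` parameters):

```python
import requests

# Send a generic SPARQL query to the Publications Office endpoint.
response = requests.get(
    "http://publications.europa.eu/webapi/rdf/sparql",
    params={
        "query": "SELECT * WHERE { ?s ?p ?o } LIMIT 10",
        "format": "application/sparql-results+json",
    },
    timeout=30,
)
for binding in response.json()["results"]["bindings"]:
    print(binding)
```
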
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ * The original documents are available at the EUR-Lex portal (https://eur-lex.europa.eu) in an unprocessed HTML format. The HTML code was stripped and the documents were split into sections (see the illustrative sketch below).
+ * The documents have been annotated by the Publications Office of the EU (https://publications.europa.eu/en).
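
A rough illustration of the HTML-stripping step (an assumption for illustration only, not the authors' actual preprocessing pipeline):

```python
from bs4 import BeautifulSoup  # third-party package: beautifulsoup4

def strip_html(html: str) -> str:
    """Drop all markup and keep only the visible text (illustrative only)."""
    return BeautifulSoup(html, "html.parser").get_text(separator="\n")
```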
+
+ #### Who are the annotators?
+
+ Publications Office of the EU (https://publications.europa.eu/en)
+
+ ### Personal and Sensitive Information
+
+ The dataset does not include personal or sensitive information.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Chalkidis et al. (2019)
+
+ ### Licensing Information
+
+ © European Union, 1998-2021
+
+ The Commission's document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.
+
+ The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
+
+ Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \
+ Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html
+
+ ### Citation Information
+
+ *Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos.*
+ *Large-Scale Multi-Label Text Classification on EU Legislation.*
+ *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Florence, Italy. 2019.*
+ ```
+ @inproceedings{chalkidis-etal-2019-large,
+     title = "Large-Scale Multi-Label Text Classification on {EU} Legislation",
+     author = "Chalkidis, Ilias and Fergadiotis, Manos and Malakasiotis, Prodromos and Androutsopoulos, Ion",
+     booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
+     year = "2019",
+     address = "Florence, Italy",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/P19-1636",
+     doi = "10.18653/v1/P19-1636",
+     pages = "6314--6322"
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"eurlex57k": {"description": "EURLEX57K contains 57k legislative documents in English from EUR-Lex portal, annotated with EUROVOC concepts.\n", "citation": "@inproceedings{chalkidis-etal-2019-large,\n title = \"Large-Scale Multi-Label Text Classification on {EU} Legislation\",\n author = \"Chalkidis, Ilias and Fergadiotis, Emmanouil and Malakasiotis, Prodromos and Androutsopoulos, Ion\",\n booktitle = \"Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics\",\n year = \"2019\",\n address = \"Florence, Italy\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/P19-1636\",\n doi = \"10.18653/v1/P19-1636\",\n pages = \"6314--6322\"\n}\n", "homepage": "http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/", "license": "CC BY-SA (Creative Commons / Attribution-ShareAlike)", "features": {"celex_id": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "eurovoc_concepts": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "eurlex", "config_name": "eurlex57k", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 167603718, "num_examples": 45000, "dataset_name": "eurlex"}, "test": {"name": "test", "num_bytes": 22046706, "num_examples": 6000, "dataset_name": "eurlex"}, "validation": {"name": "validation", "num_bytes": 21942574, "num_examples": 6000, "dataset_name": "eurlex"}}, "download_checksums": {"http://archive.org/download/EURLEX57K/dataset.zip": {"num_bytes": 50289403, "checksum": "dc6830ce8ea49aea049989c4a8253b12d0fb3665fc336d830cb07b9bd28c0f92"}}, "download_size": 50289403, "post_processing_size": null, "dataset_size": 211592998, "size_in_bytes": 261882401}}
dummy/eurlex57k/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87747b71df8c2ce1273f61cda552d5feb55ecc170568f94c332857d06f8489a3
+ size 14445
eurlex.py ADDED
@@ -0,0 +1,131 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """EURLEX57K contains 57k legislative documents in English from EUR-Lex portal, annotated with EUROVOC concepts."""
+
+
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{chalkidis-etal-2019-large,
+     title = "Large-Scale Multi-Label Text Classification on {EU} Legislation",
+     author = "Chalkidis, Ilias and Fergadiotis, Emmanouil and Malakasiotis, Prodromos and Androutsopoulos, Ion",
+     booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
+     year = "2019",
+     address = "Florence, Italy",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/P19-1636",
+     doi = "10.18653/v1/P19-1636",
+     pages = "6314--6322"
+ }
+ """
+
+ _DESCRIPTION = """\
+ EURLEX57K contains 57k legislative documents in English from EUR-Lex portal, annotated with EUROVOC concepts.
+ """
+
+ _HOMEPAGE = "http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/"
+
+ _LICENSE = "CC BY-SA (Creative Commons / Attribution-ShareAlike)"
+
+ _URLs = {
+     "eurlex57k": "http://archive.org/download/EURLEX57K/dataset.zip",
+ }
+
+
+ class EURLEX(datasets.GeneratorBasedBuilder):
+     """EURLEX57K contains 57k legislative documents in English from EUR-Lex portal, annotated with EUROVOC concepts."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="eurlex57k", version=VERSION, description="EURLEX57K: Legal Multi-label Text Classification"
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "eurlex57k"
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "celex_id": datasets.Value("string"),
+                 "title": datasets.Value("string"),
+                 "text": datasets.Value("string"),
+                 "eurovoc_concepts": datasets.features.Sequence(datasets.Value("string")),
+             }
+         )
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=features,
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         my_urls = _URLs[self.config.name]
+         data_dir = dl_manager.download_and_extract(my_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "train.jsonl"),
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"filepath": os.path.join(data_dir, "test.jsonl"), "split": "test"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "dev.jsonl"),
+                     "split": "dev",
+                 },
+             ),
+         ]
+
+     def _generate_examples(
+         self, filepath, split  # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
+     ):
+         """Yields examples as (key, example) tuples."""
+
+         with open(filepath, encoding="utf-8") as f:
+             for id_, row in enumerate(f):
+                 data = json.loads(row)
+                 yield id_, {
+                     "celex_id": data["celex_id"],
+                     "title": data["title"],
+                     "text": "\n".join([data["header"], data["recitals"]] + data["main_body"]),
+                     "eurovoc_concepts": data["eurovoc_concepts"],
+                 }