Datasets:
Tasks:
Token Classification
Modalities:
Text
Sub-tasks:
named-entity-recognition
Languages:
Spanish
Size:
10K - 100K
License:
cc-by-4.0 (per the README's License section)
upload dataset
- .gitattributes +3 -0
- README.md +132 -0
- dev.conll +3 -0
- pharmaconer.py +133 -0
- test.conll +3 -0
- train.conll +3 -0
.gitattributes
CHANGED
@@ -25,3 +25,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zstandard filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+dev.conll filter=lfs diff=lfs merge=lfs -text
+test.conll filter=lfs diff=lfs merge=lfs -text
+train.conll filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,132 @@
---
annotations_creators:
- expert-generated
languages:
- es
multilinguality:
- monolingual
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---

# PharmaCoNER Corpus

## BibTeX citation

If you use these resources in your work, please cite the following paper:

```bibtex
TO DO
```

## Digital Object Identifier (DOI) and access to dataset files

https://zenodo.org/record/4270158#.YTnXP0MzY0F

## Introduction

TO DO: This is a dataset for Named Entity Recognition (NER) from...

### Supported Tasks and Leaderboards

Named Entity Recognition, Language Model

### Languages

ES - Spanish

### Directory structure

* pharmaconer.py
* dev.conll
* test.conll
* train.conll
* README.md

## Dataset Structure

### Data Instances

Three four-column files, one per split.

### Data Fields

Every file has four columns:

* 1st column: Word form or punctuation symbol
* 2nd column: Original BRAT file name
* 3rd column: Character span (start_end offsets)
* 4th column: IOB tag

### Example

<pre>
La S0004-06142006000900008-1 123_125 O
paciente S0004-06142006000900008-1 126_134 O
tenía S0004-06142006000900008-1 135_140 O
antecedentes S0004-06142006000900008-1 141_153 O
de S0004-06142006000900008-1 154_156 O
hipotiroidismo S0004-06142006000900008-1 157_171 O
, S0004-06142006000900008-1 171_172 O
hipertensión S0004-06142006000900008-1 173_185 O
arterial S0004-06142006000900008-1 186_194 O
en S0004-06142006000900008-1 195_197 O
tratamiento S0004-06142006000900008-1 198_209 O
habitual S0004-06142006000900008-1 210_218 O
con S0004-06142006000900008-1 219_222 O
atenolol S0004-06142006000900008-1 223_231 B-NORMALIZABLES
y S0004-06142006000900008-1 232_233 O
enalapril S0004-06142006000900008-1 234_243 B-NORMALIZABLES
</pre>
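In the distributed files the four columns are tab-separated (the loading script splits on `\t`); a minimal parsing sketch using the `atenolol` row from the example above:

```python
# Parse one PharmaCoNER row: token, source BRAT file, character span, IOB tag.
# Columns are tab-separated in the distributed .conll files.
row = "atenolol\tS0004-06142006000900008-1\t223_231\tB-NORMALIZABLES"

token, brat_file, span, iob_tag = row.split("\t")
start, end = (int(offset) for offset in span.split("_"))

print(token, iob_tag, start, end)  # atenolol B-NORMALIZABLES 223 231
```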

### Data Splits

* train: 8,074 tokens
* development: 3,764 tokens
* test: 3,931 tokens
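The fourth-column IOB tags come from a fixed label set (names as declared in the `pharmaconer.py` loading script); a minimal sketch of mapping between tag names and integer ids:

```python
# IOB label set used by the loading script's ClassLabel feature.
NER_TAGS = [
    "O",
    "B-NO_NORMALIZABLES", "B-NORMALIZABLES", "B-PROTEINAS", "B-UNCLEAR",
    "I-NO_NORMALIZABLES", "I-NORMALIZABLES", "I-PROTEINAS", "I-UNCLEAR",
]

# Bidirectional mapping, as ClassLabel.str2int / int2str would provide.
tag2id = {tag: i for i, tag in enumerate(NER_TAGS)}
id2tag = dict(enumerate(NER_TAGS))

print(tag2id["B-NORMALIZABLES"], id2tag[0])  # 2 O
```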

## Dataset Creation

### Methodology

TO DO

### Curation Rationale

For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.

### Source Data

#### Initial Data Collection and Normalization

TO DO

#### Who are the source language producers?

TO DO

### Annotations

#### Annotation process

TO DO

#### Who are the annotators?

TO DO

### Dataset Curators

TO DO: Martin?

### Personal and Sensitive Information

No personal or sensitive information is included.

## Contact

TO DO: Casimiro?

## License

<a rel="license" href="https://creativecommons.org/licenses/by/4.0/"><img alt="Attribution 4.0 International License" style="border-width:0" src="https://chriszabriskie.com/img/cc-by.png" width="100"/></a><br />This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">Attribution 4.0 International License</a>.
dev.conll
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:928f7286deba5b62c27d34c385fea08ff37c042870ad449ee861dbc27962e2f4
size 4216843
pharmaconer.py
ADDED
@@ -0,0 +1,133 @@
# Loading script for the PharmaCoNER dataset.
import datasets


logger = datasets.logging.get_logger(__name__)


_CITATION = """\
@inproceedings{gonzalez-agirre-etal-2019-pharmaconer,
    title = "{P}harma{C}o{NER}: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",
    author = "Gonzalez-Agirre, Aitor  and
      Marimon, Montserrat  and
      Intxaurrondo, Ander  and
      Rabal, Obdulia  and
      Villegas, Marta  and
      Krallinger, Martin",
    booktitle = "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-5701",
    doi = "10.18653/v1/D19-5701",
    pages = "1--10",
    abstract = "One of the biomedical entity types of relevance for medicine or biosciences are chemical compounds and drugs. The correct detection these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in the sub-track 1, (77 system runs), and 7 teams in the sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.",
}
"""

_DESCRIPTION = """\
https://temu.bsc.es/pharmaconer/
"""

_URL = "https://huggingface.co/datasets/BSC-TeMU/pharmaconer/resolve/main/"
# _URL = "./"
_TRAINING_FILE = "train.conll"
_DEV_FILE = "dev.conll"
_TEST_FILE = "test.conll"


class PharmaCoNERConfig(datasets.BuilderConfig):
    """BuilderConfig for the PharmaCoNER dataset."""

    def __init__(self, **kwargs):
        """BuilderConfig for PharmaCoNER.

        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(PharmaCoNERConfig, self).__init__(**kwargs)


class PharmaCoNER(datasets.GeneratorBasedBuilder):
    """PharmaCoNER dataset."""

    BUILDER_CONFIGS = [
        PharmaCoNERConfig(
            name="PharmaCoNER",
            version=datasets.Version("1.0.0"),
            description="PharmaCoNER dataset",
        ),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "tokens": datasets.Sequence(datasets.Value("string")),
                    "ner_tags": datasets.Sequence(
                        datasets.features.ClassLabel(
                            names=[
                                "O",
                                "B-NO_NORMALIZABLES",
                                "B-NORMALIZABLES",
                                "B-PROTEINAS",
                                "B-UNCLEAR",
                                "I-NO_NORMALIZABLES",
                                "I-NORMALIZABLES",
                                "I-PROTEINAS",
                                "I-UNCLEAR",
                            ]
                        )
                    ),
                }
            ),
            supervised_keys=None,
            homepage="https://temu.bsc.es/pharmaconer/",
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        urls_to_download = {
            "train": f"{_URL}{_TRAINING_FILE}",
            "dev": f"{_URL}{_DEV_FILE}",
            "test": f"{_URL}{_TEST_FILE}",
        }
        downloaded_files = dl_manager.download_and_extract(urls_to_download)

        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
        ]

    def _generate_examples(self, filepath):
        logger.info("⏳ Generating examples from = %s", filepath)
        with open(filepath, encoding="utf-8") as f:
            guid = 0
            tokens = []
            ner_tags = []
            for line in f:
                if line.startswith("-DOCSTART-") or line == "" or line == "\n":
                    if tokens:
                        yield guid, {
                            "id": str(guid),
                            "tokens": tokens,
                            "ner_tags": ner_tags,
                        }
                        guid += 1
                        tokens = []
                        ner_tags = []
                else:
                    # PharmaCoNER tokens are tab separated
                    splits = line.split("\t")
                    tokens.append(splits[0])
                    ner_tags.append(splits[-1].rstrip())
            # last example
            yield guid, {
                "id": str(guid),
                "tokens": tokens,
                "ner_tags": ner_tags,
            }
test.conll
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e1be9198db6f1d6ff24be10fdb95102e866f0bd462b804057f9a28a708003b37
size 4386177
train.conll
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:675aa1b5a2a5424d44c0dcaa197227b61bc8ef11cd0099f9ded0564a9bc59007
size 8830998