tlemberger committed
Commit a0eface
Parent: 9281402
reverting to dataset sd-nlp
README.md CHANGED
@@ -6,7 +6,7 @@ tags:
 - token classification
 license: agpl-3.0
 datasets:
-- EMBO/sd-
+- EMBO/sd-nlp
 metrics:
 -
 ---
@@ -15,7 +15,7 @@ metrics:
 
 ## Model description
 
-This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-
+This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `GENEPROD_ROLES` configuration to perform pure context-dependent semantic role classification of bioentities.
 
 
 ## Intended uses & limitations
@@ -30,7 +30,7 @@ To have a quick check of the model:
 from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification
 example = """<s>The <mask> overexpression in cells caused an increase in <mask> expression.</s>"""
 tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
-model = RobertaForTokenClassification.from_pretrained('EMBO/sd-roles')
+model = RobertaForTokenClassification.from_pretrained('EMBO/sd-geneprod-roles')
 ner = pipeline('ner', model, tokenizer=tokenizer)
 res = ner(example)
 for r in res:
@@ -43,7 +43,7 @@ The model must be used with the `roberta-base` tokenizer.
 
 ## Training data
 
-The model was trained for token classification using the [EMBO/sd-
+The model was trained for token classification using the [EMBO/sd-nlp dataset](https://huggingface.co/datasets/EMBO/sd-nlp) which includes manually annotated examples.
 
 ## Training procedure
 
@@ -53,7 +53,7 @@ Training code is available at https://github.com/source-data/soda-roberta
 
 - Model fine-tuned: EMBL/bio-lm
 - Tokenizer vocab size: 50265
-- Training data: EMBO/sd-
+- Training data: EMBO/sd-nlp
 - Dataset configuration: GENEPROD_ROLES
 - Training with 48771 examples.
 - Evaluating on 13801 examples.
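After this change, the quick-check snippet loads `EMBO/sd-geneprod-roles`. The hunk cuts the snippet off at `for r in res:`, so here is a self-contained version for reference; the loop body (printing each token with its predicted role) is an assumed completion, not part of the commit.

```python
# Quick check of EMBO/sd-geneprod-roles, following the README snippet.
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification

example = """<s>The <mask> overexpression in cells caused an increase in <mask> expression.</s>"""
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-geneprod-roles')
ner = pipeline('ner', model, tokenizer=tokenizer)
res = ner(example)
for r in res:
    # Assumed loop body: the diff truncates here. Each result dict from the
    # token-classification pipeline carries the token text and predicted label.
    print(r['word'], r['entity'])
```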
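The training-data section and the `Training data:` bullet both now point at `EMBO/sd-nlp` with the `GENEPROD_ROLES` configuration. A minimal sketch of pulling that configuration with the `datasets` library, for anyone inspecting the examples; standard `load_dataset` usage is assumed, and the `train` split name is a library convention rather than something stated in the diff.

```python
# Minimal sketch: load the sd-nlp dataset with the GENEPROD_ROLES
# configuration named in the README.
from datasets import load_dataset

sd_nlp = load_dataset('EMBO/sd-nlp', 'GENEPROD_ROLES')
print(sd_nlp)             # available splits and their sizes
print(sd_nlp['train'][0])  # one manually annotated example (split name assumed)
```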