tlemberger committed
Commit · 8a54381
Parent(s): c615d3a
typos
README.md CHANGED
@@ -6,7 +6,7 @@ tags:
 - token classification
 license: agpl-3.0
 datasets:
-- EMBO/sd-
+- EMBO/sd-nlp
 metrics:
 -
 ---
@@ -15,7 +15,7 @@ metrics:
 
 ## Model description
 
-This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of english scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It as then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `
+This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained with a masked language modeling task on a compendium of English scientific texts from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `GENEPROD_ROLES` configuration to perform purely context-dependent semantic role classification of bioentities.
 
 
 ## Intended uses & limitations
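A minimal usage sketch for the fine-tuned checkpoint described above, assuming it is published on the Hugging Face Hub; the repo id below is a placeholder, since this diff does not name the model repository:

```python
# Minimal sketch: token classification with the fine-tuned model.
# NOTE: "EMBO/sd-geneprod-roles" is a hypothetical repo id; substitute
# the actual checkpoint this README describes.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

checkpoint = "EMBO/sd-geneprod-roles"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint)

role_tagger = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge B-/I- pieces into entity spans
)
print(role_tagger("Cells were treated with EGF and phosphorylation of AKT1 was measured."))
```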
@@ -51,8 +51,10 @@ The training was run on a NVIDIA DGX Station with 4XTesla V100 GPUs.
 
 Training code is available at https://github.com/source-data/soda-roberta
 
+- Model fine-tuned: EMBL/bio-lm
 - Tokenizer vocab size: 50265
-- Training data: EMBO/
+- Training data: EMBO/sd-nlp
+- Dataset configuration: GENEPROD_ROLES
 - Training with 48771 examples.
 - Evaluating on 13801 examples.
 - Training on 15 features: O, I-CONTROLLED_VAR, B-CONTROLLED_VAR, I-MEASURED_VAR, B-MEASURED_VAR
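The training data named in the list above can be pulled from the Hub for inspection. A sketch with the Hugging Face `datasets` library, assuming the `GENEPROD_ROLES` configuration is exposed under that name (recent `datasets` versions may additionally require `trust_remote_code=True` for script-based datasets):

```python
# Sketch: load the fine-tuning data referenced above.
from datasets import load_dataset

# "GENEPROD_ROLES" is the configuration named in this card; split names
# are assumptions based on the example counts reported above.
ds = load_dataset("EMBO/sd-nlp", "GENEPROD_ROLES")
print(ds)                    # inspect available splits
print(ds["train"].num_rows)  # expected around 48771 per this card
```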