tlemberger committed
Commit: 9281402
Parent(s): 8a54381
update model card
README.md CHANGED
@@ -6,16 +6,16 @@ tags:
 - token classification
 license: agpl-3.0
 datasets:
-- EMBO/sd-
+- EMBO/sd-panels
 metrics:
 -
 ---

-# sd-roles
+# sd-geneprod-roles

 ## Model description

-This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of
+This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-panels](https://huggingface.co/datasets/EMBO/sd-panels) dataset with the `GENEPROD_ROLES` configuration to perform pure context-dependent semantic role classification of bioentities.


 ## Intended uses & limitations

@@ -47,13 +47,13 @@ The model was trained for token classification using the [EMBO/sd-panels dataset

 ## Training procedure

-The training was run on
+The training was run on an NVIDIA DGX Station with 4XTesla V100 GPUs.

 Training code is available at https://github.com/source-data/soda-roberta

 - Model fine-tuned: EMBL/bio-lm
 - Tokenizer vocab size: 50265
-- Training data: EMBO/sd-
+- Training data: EMBO/sd-panels
 - Dataset configuration: GENEPROD_ROLES
 - Training with 48771 examples.
 - Evaluating on 13801 examples.
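The updated card describes a token-classification model for context-dependent semantic roles of bioentities. As a minimal usage sketch, the snippet below loads such a model with the standard `transformers` token-classification pipeline; the Hub id `EMBO/sd-geneprod-roles` is an assumption inferred from the new card title and the EMBO datasets it references, not something stated in this commit, and nothing here is specific to the soda-roberta training code.

```python
# Minimal sketch: querying the fine-tuned token-classification model.
# The repository id below is assumed from the card title; adjust if needed.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "EMBO/sd-geneprod-roles"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

role_tagger = pipeline("token-classification", model=model, tokenizer=tokenizer)

example = "Phosphorylation of Akt was reduced when cells were treated with LY294002."
for token in role_tagger(example):
    # Each prediction carries the role label, the surface token and a confidence score.
    print(token["entity"], token["word"], round(token["score"], 3))
```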