mbruton committed on
Commit
00c1b8d
1 Parent(s): 05ed593

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -10,7 +10,7 @@ library_name: transformers
 pipeline_tag: token-classification
 ---
 
-# Model Card for Model ID
+# Model Card for GalBERT for Semantic Role Labeling (cased)
 
 This model is fine-tuned on [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) and is one of 24 models introduced as part of [this project](https://github.com/mbruton0426/GalicianSRL). Prior to this work, there were no published Galician datasets or models for SRL.
 
@@ -18,7 +18,7 @@ This model is fine-tuned on [multilingual BERT](https://huggingface.co/bert-base
 
 ### Model Description
 
-Galician mBERT for Semantic Role Labeling (SRL) is a transformers model, leveraging mBERT's extensive pretraining on 104 languages to achieve better SRL predictions for low-resource Galician. This model is cased: it makes a difference between english and English. It was fine-tuned with the following objectives:
+GalBERT for Semantic Role Labeling (SRL) is a transformers model, leveraging mBERT's extensive pretraining on 104 languages to achieve better SRL predictions for low-resource Galician. This model is cased: it makes a difference between english and English. It was fine-tuned with the following objectives:
 
 - Identify up to 13 verbal roots within a sentence.
 - Identify available arguments for each verbal root. Due to scarcity of data, this model focused solely on the identification of arguments 0, 1, and 2.
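Since the card tags the model as `token-classification`, a minimal usage sketch may be helpful. This is an illustration only, assuming the standard transformers pipeline API; the Hub ID `mbruton/galbert-srl` is a placeholder, not confirmed by the commit.

```python
# Minimal sketch: querying the SRL model through the standard
# transformers token-classification pipeline.
# NOTE: "mbruton/galbert-srl" is a hypothetical model ID; substitute
# the actual Hugging Face Hub ID for this model.
from transformers import pipeline

srl = pipeline("token-classification", model="mbruton/galbert-srl")

# Galician example sentence: "The boy ate an apple."
for token in srl("O neno comeu unha mazá."):
    print(token["word"], token["entity"], round(token["score"], 3))
```

Each output entry pairs a subword token with a predicted label, which for this model would mark verbal roots and their arguments 0, 1, and 2 as described above.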