---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: >-
— Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in
den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις
vorherging , das Stück Σαλαμίνιαι folgte .
---

# Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the AjMC German NER Dataset using hmByT5 as the backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics and was created in the context of the Ajax MultiCommentary project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
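A minimal usage sketch with Flair, assuming the custom `ByT5Embedding` class described in the next section is importable; the repository id below is hypothetical and stands in for this model's actual Hub id:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Hypothetical repository id; replace with this model's actual Hub id.
# The custom ByT5Embedding class (see next section) must be importable,
# otherwise loading the checkpoint will fail.
tagger = SequenceTagger.load("stefan-it/hmbench-ajmc-de-hmbyt5")

# Example input in historic German orthography, as in the widget text above.
sentence = Sentence("Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden .")

tagger.predict(sentence)

# Print all predicted entity spans (pers, work, loc, object, date, scope).
for span in sentence.get_spans("ner"):
    print(span)
```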
## ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair currently requires implementing a custom `ByT5Embedding` class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget does not work with hmByT5 models on the Model Hub at the moment and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
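The original class is not reproduced on this card, but a minimal sketch of such an embedding, under simplifying assumptions (each token is encoded in isolation and mean-pooled, no gradients), could look like this; `google/byt5-small` is used as a stand-in checkpoint, and the historic checkpoint from the header may additionally require `from_flax=True`:

```python
import torch
from flair.embeddings import TokenEmbeddings
from transformers import AutoTokenizer, T5EncoderModel


class ByT5Embedding(TokenEmbeddings):
    """Simplified sketch: embeds each token by mean-pooling the byte-level
    encoder states of a ByT5 model. Not the exact class used for fine-tuning."""

    def __init__(self, model_name: str = "google/byt5-small"):
        super().__init__()
        self.name = f"byt5-{model_name}"
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = T5EncoderModel.from_pretrained(model_name)
        self.model.eval()
        self.static_embeddings = True  # no backpropagation in this sketch

    @property
    def embedding_length(self) -> int:
        return self.model.config.d_model

    def _add_embeddings_internal(self, sentences):
        with torch.no_grad():
            for sentence in sentences:
                for token in sentence:
                    # ByT5 operates directly on UTF-8 bytes: no subword vocabulary.
                    input_ids = self.tokenizer(token.text, return_tensors="pt").input_ids
                    states = self.model(input_ids=input_ids).last_hidden_state
                    token.set_embedding(self.name, states.mean(dim=1).squeeze(0))
        return sentences
```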
## Results
We performed a hyper-parameter search over the following parameters, with 5 different seeds per configuration (a fine-tuning sketch is shown after this list):

- Batch Sizes: `[8, 4]`
- Learning Rates: `[0.00015, 0.00016]`
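For illustration only, a fine-tuning run for the best configuration (`bs4-e10-lr0.00016`) could look roughly like this, assuming the `ByT5Embedding` sketch from the previous section and Flair's `NER_HIPE_2022` dataset loader; the actual training script is not part of this card:

```python
from flair.datasets import NER_HIPE_2022
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# German AjMC split of HIPE-2022 via Flair's dataset loader.
corpus = NER_HIPE_2022(dataset_name="ajmc", language="de")
label_dict = corpus.make_label_dictionary(label_type="ner")

# ByT5Embedding is the sketch class from the previous section.
embeddings = ByT5Embedding()

tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type="ner",
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

trainer = ModelTrainer(tagger, corpus)

# Best configuration from the table below: batch size 4, 10 epochs, lr 0.00016.
trainer.fine_tune(
    "resources/taggers/ajmc-de-hmbyt5",
    learning_rate=0.00016,
    mini_batch_size=4,
    max_epochs=10,
)
```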
We report the micro F1-score on the development set:
| Configuration       | Run 1  | Run 2  | Run 3  | Run 4  | Run 5  | Avg.            |
|---------------------|--------|--------|--------|--------|--------|-----------------|
| `bs4-e10-lr0.00016` | 0.8892 | 0.8913 | 0.8867 | 0.8843 | 0.8828 | 0.8869 ± 0.0031 |
| `bs4-e10-lr0.00015` | 0.8786 | 0.8793 | 0.8830 | 0.8807 | 0.8722 | 0.8788 ± 0.0036 |
| `bs8-e10-lr0.00016` | 0.8602 | 0.8684 | 0.8643 | 0.8643 | 0.8623 | 0.8639 ± 0.0027 |
| `bs8-e10-lr0.00015` | 0.8551 | 0.8707 | 0.8599 | 0.8609 | 0.8612 | 0.8616 ± 0.0051 |
The training log and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the Model Hub.
More information about fine-tuning can be found here.
## Acknowledgements
We thank Luisa März, Katharina Schmid and Erion Çano for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC). Many thanks for providing access to the TPUs ❤️