---
language: de
license: mit
tags:
  - flair
  - token-classification
  - sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
  - text: >-
      Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und
      mehrern Truppen verließ , um in einer Central - Lage bey Sligo die
      Operationen der Armee persönlich zu dirigiren . Der Feind dürfte bald in
      die Enge kommen , da Gen . Lacke mit 6000 Mann ihm entgegen marschirt .
---

# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)

This Flair model was fine-tuned on the German HIPE-2020 NER dataset using hmByT5 as the backbone LM.

The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found here.

The following named entities were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.

## ⚠️ Inference Widget ⚠️

Fine-tuning ByT5 models in Flair currently requires implementing a custom ByT5Embedding class.

This class needs to be present when running the model with Flair.

Thus, the inference widget on the Model Hub does not currently work with hmByT5 and is disabled.

This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
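
For illustration, here is a minimal sketch of what such a custom embedding class could look like. The class name `ByT5Embeddings`, the per-token encoding and the mean-pooling strategy are assumptions for this sketch, not necessarily the exact implementation used to train this model:

```python
import torch
from flair.embeddings import TokenEmbeddings
from transformers import AutoTokenizer, T5EncoderModel


class ByT5Embeddings(TokenEmbeddings):
    """Sketch of a ByT5 token embedding: each token is embedded by
    mean-pooling the encoder hidden states of its UTF-8 bytes."""

    def __init__(self, model_name: str):
        super().__init__()
        self.name = model_name
        self.static_embeddings = False
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = T5EncoderModel.from_pretrained(model_name)
        self._embedding_length = self.model.config.d_model

    @property
    def embedding_length(self) -> int:
        return self._embedding_length

    def _add_embeddings_internal(self, sentences):
        # Inference-only sketch: encodes every token separately, which is
        # simple but slow; a real implementation would batch whole sentences.
        for sentence in sentences:
            for token in sentence:
                encoding = self.tokenizer(token.text, return_tensors="pt")
                with torch.no_grad():
                    hidden = self.model(**encoding).last_hidden_state
                token.set_embedding(self.name, hidden.mean(dim=1).squeeze(0))
        return sentences
```

With such a class present on the import path, the fine-tuned tagger can then be loaded and used via `SequenceTagger.load()` as usual.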

## Results

We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

- Batch Sizes: [8, 4]
- Learning Rates: [0.00015, 0.00016]

We report the micro F1-score on the development set:

| Configuration     | Run 1  | Run 2  | Run 3  | Run 4  | Run 5  | Avg.            |
|-------------------|--------|--------|--------|--------|--------|-----------------|
| bs4-e10-lr0.00016 | 0.7596 | 0.7466 | 0.7771 | 0.7894 | 0.7717 | 0.7689 ± 0.0147 |
| bs8-e10-lr0.00015 | 0.7593 | 0.7663 | 0.7611 | 0.7647 | 0.7667 | 0.7636 ± 0.0029 |
| bs8-e10-lr0.00016 | 0.7607 | 0.7736 | 0.7567 | 0.7560 | 0.7460 | 0.7586 ± 0.0089 |
| bs4-e10-lr0.00015 | 0.7541 | 0.7466 | 0.7575 | 0.7579 | 0.7599 | 0.7552 ± 0.0047 |
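
A minimal sketch of how such a grid search could be scripted with Flair's `ModelTrainer` is shown below. The data folder, column format, hidden size and output paths are assumptions (the 10 epochs are inferred from the `e10` configuration names), and `ByT5Embeddings` refers to the sketch above:

```python
import itertools

import flair
from flair.datasets import ColumnCorpus
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Assumed local copy of the German HIPE-2020 data in CoNLL-style columns.
corpus = ColumnCorpus("data/hipe2020-de", {0: "text", 1: "ner"})
label_dictionary = corpus.make_label_dictionary(label_type="ner")

# The grid from the table above: 2 batch sizes x 2 learning rates x 5 seeds.
for bs, lr, seed in itertools.product([8, 4], [0.00015, 0.00016], range(1, 6)):
    flair.set_seed(seed)

    tagger = SequenceTagger(
        hidden_size=256,  # assumed value, not reported in this card
        embeddings=ByT5Embeddings(
            "hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax"
        ),
        tag_dictionary=label_dictionary,
        tag_type="ner",
    )

    trainer = ModelTrainer(tagger, corpus)
    trainer.fine_tune(
        f"resources/taggers/hipe2020-bs{bs}-e10-lr{lr}-seed{seed}",
        learning_rate=lr,
        mini_batch_size=bs,
        max_epochs=10,
    )
```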

The training log and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the Model Hub.

More information about fine-tuning can be found here.

## Acknowledgements

We thank Luisa März, Katharina Schmid and Erion Çano for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC). Many thanks for providing access to the TPUs ❤️