---
language: fr
license: mit
tags:
  - flair
  - token-classification
  - sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
  - text: >-
      Le Moniteur universel fait ressortir les avantages de la situation de l '
      Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle
      peut avoir dans la question d ' Orient .
---

# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)

This Flair model was fine-tuned on the French NewsEye NER Dataset using hmByT5 as the backbone LM.

The NewsEye dataset consists of diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found here.

The following NEs were annotated: PER, LOC, ORG and HumanProd.

## ⚠️ Inference Widget ⚠️

Fine-tuning ByT5 models in Flair is currently done by implementing a custom ByT5Embedding class.

This class needs to be present when running the model with Flair.

Thus, the inference widget currently does not work with hmByT5 on the Model Hub and is disabled.

This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
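
For reference, a minimal sketch of how the tagger could be loaded and used once the custom ByT5Embedding class is importable is shown below. The model identifier and the local module name are placeholders, not guaranteed paths.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Assumption: the custom ByT5Embedding class is defined locally and
# importable before loading the model, e.g.:
# from byt5_embeddings import ByT5Embedding  # hypothetical module name

# Placeholder model id -- replace with this repository's actual id.
tagger = SequenceTagger.load("stefan-it/<this-model>")

sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne ."
)
tagger.predict(sentence)

# Print the predicted named entities (PER, LOC, ORG, HumanProd).
for span in sentence.get_spans("ner"):
    print(span)
```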

## Results

We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

* Batch Sizes: [8, 4]
* Learning Rates: [0.00015, 0.00016]

We report the micro F1-score on the development set:

| Configuration       | Run 1  | Run 2  | Run 3  | Run 4  | Run 5  | Avg.         |
|---------------------|--------|--------|--------|--------|--------|--------------|
| bs4-e10-lr0.00016   | 0.793  | 0.803  | 0.8054 | 0.8069 | 0.8133 | 80.43 ± 0.66 |
| bs8-e10-lr0.00016   | 0.7888 | 0.8094 | 0.8043 | 0.8011 | 0.8117 | 80.31 ± 0.8  |
| bs4-e10-lr0.00015   | 0.7884 | 0.8109 | 0.8005 | 0.8083 | 0.8022 | 80.21 ± 0.78 |
| bs8-e10-lr0.00015   | 0.788  | 0.8003 | 0.8067 | 0.8035 | 0.8064 | 80.1 ± 0.69  |

The training log and TensorBoard logs are also uploaded to the model hub.

More information about fine-tuning can be found here.
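
As a rough illustration of the fine-tuning setup described above, a minimal Flair training sketch is given below. The corpus constructor arguments and the output path are assumptions, and Flair's generic TransformerWordEmbeddings is used only as a stand-in for the custom ByT5Embedding class mentioned in the widget note; it may not handle ByT5 correctly, which is exactly why that custom class exists.

```python
from flair.datasets import NER_HIPE_2022
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Assumption: the French NewsEye split is exposed this way; check the
# Flair documentation for the exact constructor arguments.
corpus = NER_HIPE_2022(dataset_name="newseye", language="fr")
label_dict = corpus.make_label_dictionary(label_type="ner")

# Stand-in embeddings; the original work uses a custom ByT5Embedding class.
embeddings = TransformerWordEmbeddings(
    "hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
    fine_tune=True,
)

tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type="ner",
    use_crf=False,
)

trainer = ModelTrainer(tagger, corpus)

# One configuration from the search above: batch size 4, 10 epochs, lr 0.00016.
trainer.fine_tune(
    "resources/taggers/newseye-fr-hmbyt5",  # output path (placeholder)
    learning_rate=0.00016,
    mini_batch_size=4,
    max_epochs=10,
)
```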

## Acknowledgements

We thank Luisa März, Katharina Schmid and Erion Çano for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC). Many thanks for providing access to the TPUs ❤️