---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-64k-td-cased
widget:
- text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi , 719 , 826 , 4496 .
---

# Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022)

This Flair model was fine-tuned on the
[AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER dataset using hmBERT 64k as the backbone LM.

The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the
[Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/) project.

The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.

# Results

We performed a hyper-parameter search over the following parameters, with 5 different seeds per configuration:

* Batch Sizes: `[4, 8]`
* Learning Rates: `[3e-05, 5e-05]`

We report the micro F1-score on the development set:

| Configuration     | Seed 1       | Seed 2       | Seed 3          | Seed 4       | Seed 5       | Average         |
|-------------------|--------------|--------------|-----------------|--------------|--------------|-----------------|
| `bs4-e10-lr3e-05` | [0.8586][1]  | [0.8586][2]  | [**0.8688**][3] | [0.8539][4]  | [0.8529][5]  | 0.8586 ± 0.0063 |
| `bs8-e10-lr5e-05` | [0.8539][6]  | [0.8653][7]  | [0.8518][8]     | [0.8536][9]  | [0.8374][10] | 0.8524 ± 0.0099 |
| `bs8-e10-lr3e-05` | [0.8486][11] | [0.8486][12] | [0.8522][13]    | [0.8512][14] | [0.8414][15] | 0.8484 ± 0.0042 |
| `bs4-e10-lr5e-05` | [0.8529][16] | [0.8425][17] | [0.8501][18]    | [0.8412][19] | [0.8501][20] | 0.8474 ± 0.0052 |

[1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5

The [training log](training.log) and TensorBoard logs (not available for the hmBERT Base model) are also uploaded to the model hub.

More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
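For illustration only, a minimal Flair fine-tuning setup in the spirit of the best configuration above (`bs4-e10-lr3e-05`, 10 epochs) could look like the sketch below. The dataset-loader arguments and the tagger settings (e.g. `hidden_size`) are assumptions and may differ from the actual hmBench scripts:

```python
from flair.datasets import NER_HIPE_2022
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Load the AjMC French split of HIPE-2022 via Flair's built-in loader
# (loader arguments are an assumption; hmBench may configure this differently)
corpus = NER_HIPE_2022(dataset_name="ajmc", language="fr")
label_dict = corpus.make_label_dictionary(label_type="ner")

# hmBERT 64k backbone: last layer only, first-subtoken pooling,
# matching the "layers-1" and "poolingfirst" parts of the model names above
embeddings = TransformerWordEmbeddings(
    model="dbmdz/bert-base-historic-multilingual-64k-td-cased",
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
)

# Plain linear tag head, no CRF ("crfFalse" in the model names above)
tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type="ner",
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

# Best configuration from the table above: batch size 4, lr 3e-05, 10 epochs
trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune(
    "resources/taggers/ajmc-fr-hmbert64k",
    learning_rate=3e-05,
    mini_batch_size=4,
    max_epochs=10,
)
```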
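# Usage

The fine-tuned models can be used like any other Flair sequence tagger. A minimal inference example, loading the best run from the table above (any of the linked checkpoints works the same way):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-ajmc-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3"
)

# The example sentence from the widget above
sentence = Sentence(
    "— 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les "
    "tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant "
    "QEd , Roi , 719 , 826 , 4496 ."
)

# Run NER prediction and print the detected entity spans
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```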
# Acknowledgements

We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️