---
title: README
emoji: 📚
colorFrom: indigo
colorTo: purple
sdk: static
pinned: false
---
# hmByT5 - Preliminary Language Models
Preliminary Historical Multilingual and Monolingual ByT5 Models. The following languages are currently covered:
* English (British Library Corpus - Books)
* German (Europeana Newspaper)
* French (Europeana Newspaper)
* Finnish (Europeana Newspaper)
* Swedish (Europeana Newspaper)
* Dutch (Delpher Corpus)
More details can be found in [our GitHub repository](https://github.com/stefan-it/hmByT5).
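The checkpoints are plain ByT5 models, so they can be loaded with the Hugging Face `transformers` library. Here is a minimal sketch; the checkpoint name is one of the models from the leaderboard below, and the example text is purely illustrative:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Any checkpoint from the leaderboard table below can be substituted here.
model_name = "hmbyt5/byt5-small-english"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# ByT5 operates directly on UTF-8 bytes, so historic spellings and OCR noise
# never map to an out-of-vocabulary token.
text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")
print(inputs["input_ids"].shape)  # one id per byte, plus the </s> marker
```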
# Leaderboard
We evaluate our pretrained language models on various datasets from HIPE-2020, HIPE-2022 and Europeana. The following
table gives an overview of the datasets used; a fine-tuning sketch follows the table.
| Language | Dataset | Additional Dataset |
|----------|--------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------|
| English | [AjMC](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) | - |
| German | [AjMC](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) | - |
| French | [AjMC](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md) | [ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) |
| Finnish | [NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) | - |
| Swedish | [NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md) | - |
| Dutch | [ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar) | - |
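One possible way to run such a NER evaluation is with the [Flair](https://github.com/flairNLP/flair) library, which ships a loader for the HIPE-2022 data. The sketch below is illustrative: the `NER_HIPE_2022` arguments and all hyper-parameters are assumptions, not the exact settings behind the leaderboard numbers.

```python
from flair.datasets import NER_HIPE_2022
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Load the English AjMC split via Flair's HIPE-2022 loader
# (argument values here are an assumption for illustration).
corpus = NER_HIPE_2022(dataset_name="ajmc", language="en")
label_dictionary = corpus.make_label_dictionary(label_type="ner")

# Use the historic ByT5 checkpoint as fine-tunable embeddings.
embeddings = TransformerWordEmbeddings(
    model="hmbyt5/byt5-small-english",
    fine_tune=True,
)

tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dictionary,
    tag_type="ner",
    use_crf=False,
)

trainer = ModelTrainer(tagger, corpus)

# Hyper-parameters are illustrative, not the leaderboard settings.
trainer.fine_tune(
    "resources/taggers/hmbyt5-ajmc-en",
    learning_rate=5e-5,
    mini_batch_size=4,
    max_epochs=10,
)
```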
Current best models (each cell reports mean F1-score ± standard deviation over several fine-tuning runs; empty cells denote configurations that were not evaluated):
| Model | English AjMC | German AjMC | French AjMC | Finnish NewsEye | Swedish NewsEye | Dutch ICDAR | French ICDAR | Avg. |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------|--------------|--------------|-----------------|-----------------|--------------|--------------|------|
| [`hmbyt5/byt5-small-english`](https://huggingface.co/hmbyt5/byt5-small-english) | 85.65 ± 1.21 | 87.27 ± 0.50 | 84.44 ± 0.79 | | | | | |
| [`hmbyt5-preliminary/byt5-small-english-german`](https://huggingface.co/hmbyt5-preliminary/byt5-small-english-german) | 85.74 ± 0.72 | 87.45 ± 0.67 | 84.23 ± 0.65 | | | | | |
| [`hmbyt5-preliminary/byt5-small-english-german-french`](https://huggingface.co/hmbyt5-preliminary/byt5-small-english-german-french) | 85.61 ± 0.96 | 87.24 ± 0.76 | 84.39 ± 0.68 | | | | | |
| [`hmbyt5-preliminary/byt5-small-english-german-french-finnish`](https://huggingface.co/hmbyt5-preliminary/byt5-small-english-german-french-finnish) | 85.30 ± 1.14 | 87.37 ± 0.53 | 84.12 ± 0.42 | | | | | |
| [`hmbyt5-preliminary/byt5-small-english-german-french-finnish-swedish`](https://huggingface.co/hmbyt5-preliminary/byt5-small-english-german-french-finnish-swedish) | 85.40 ± 0.78 | 87.12 ± 0.19 | 84.41 ± 0.34 | | | | | |
| [`hmbyt5-preliminary/byt5-small-english-german-french-finnish-swedish-dutch`](https://huggingface.co/hmbyt5-preliminary/byt5-small-english-german-french-finnish-swedish-dutch) | 85.51 ± 0.68 | 87.58 ± 0.39 | 84.39 ± 0.83 | 55.46 ± 1.99 | 73.38 ± 2.45 | 84.80 ± 0.44 | 75.97 ± 0.55 | |
| [`hmbyt5-preliminary/byt5-small-multilingual-4g`](https://huggingface.co/hmbyt5-preliminary/byt5-small-multilingual-4g) | 83.49 ± 0.96 | 87.65 ± 0.63 | 84.16 ± 0.90 | | | | | |
| [`hmbyt5-preliminary/byt5-small-multilingual-4g-2e`](https://huggingface.co/hmbyt5-preliminary/byt5-small-multilingual-4g-2e) | 83.86 ± 0.61 | 87.54 ± 0.19 | 84.29 ± 0.41 | | | | | |
| [`hmbyt5-preliminary/byt5-small-multilingual-4g-3e`](https://huggingface.co/hmbyt5-preliminary/byt5-small-multilingual-4g-3e) | 83.49 ± 0.99 | 87.38 ± 0.53 | 84.30 ± 0.51 | | | | | |
| [`hmbyt5-preliminary/byt5-small-historic-multilingual-flax`](https://huggingface.co/hmbyt5-preliminary/byt5-small-historic-multilingual-flax) | 83.28 ± 1.67 | 86.98 ± 0.71 | 83.49 ± 1.06 | 76.96 ± 1.58 | 78.80 ± 1.89 | 86.47 ± 0.79 | 77.43 ± 0.51 | |
| [`hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax`](https://huggingface.co/hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax) | 84.91 ± 0.86 | 88.02 ± 0.35 | 84.78 ± 0.75 | 77.77 ± 1.83 | 79.94 ± 0.60 | 86.85 ± 0.91 | 77.45 ± 0.54 | |
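For rows where all seven scores are present, the `Avg.` column is simply the macro-average of the per-dataset means. A worked example for the `span20-flax` row, with values copied from the table above:

```python
# Macro-average of the per-dataset mean F1-scores for
# hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
# (values copied from the leaderboard table above).
scores = [84.91, 88.02, 84.78, 77.77, 79.94, 86.85, 77.45]
average = sum(scores) / len(scores)
print(f"Avg. F1: {average:.2f}")  # -> Avg. F1: 82.82
```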
More recent results on additional datasets can be found in the [`hmLeaderboard`](https://huggingface.co/spaces/hmbench/hmLeaderboard).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historical Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs! ❤️