
BERTje: A Dutch BERT model

Wietse de Vries β€’ Andreas van Cranenburgh β€’ Arianna Bisazza β€’ Tommaso Caselli β€’ Gertjan van Noord β€’ Malvina Nissim

Model description

BERTje is a Dutch pre-trained BERT model developed at the University of Groningen.

For details, check out our paper on arXiv, the code on GitHub, and related work on Semantic Scholar.

The paper and GitHub page mention fine-tuned models that are available here.

How to use

from transformers import AutoTokenizer, AutoModel, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")

# Load either the PyTorch or the TensorFlow version of the model:
model = AutoModel.from_pretrained("GroNLP/bert-base-dutch-cased")    # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/bert-base-dutch-cased")  # TensorFlow

WARNING: The vocabulary size of BERTje changed in 2021. If you use an older fine-tuned model and experience problems with the GroNLP/bert-base-dutch-cased tokenizer, use the following tokenizer instead:

tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased", revision="v1")  # v1 is the old vocabulary

Benchmarks

The arXiv paper lists benchmarks. Here are a couple of comparisons between BERTje, multilingual BERT, BERT-NL and RobBERT that were done after writing the paper. Unlike some other comparisons, the fine-tuning procedures for these benchmarks are identical for each pre-trained model. You may be able to achieve higher scores for individual models by optimizing fine-tuning procedures.

More experimental results will be added to this page as they are finished. Technical details about how we fine-tuned these models will be published later, along with downloadable fine-tuned checkpoints.

All of the tested models are base-sized (12 layers) with cased tokenization.

Headers in the tables below link to the original data sources. Scores link to the model pages that correspond to each specific fine-tuned model. These tables will be updated as more fine-tuned models are made available.
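
Both tasks below are framed as token classification: the same pre-trained encoder is topped with a randomly initialized classification head and fine-tuned on the labelled data. As a minimal sketch of that setup (our own illustration, not the exact benchmark code; num_labels=9 assumes the CoNLL-2002 BIO tag set of four entity types plus O):

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Any of the compared checkpoints can be substituted here
model_name = "GroNLP/bert-base-dutch-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=9)

inputs = tokenizer("Wietse woont in Groningen.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_tokens, 9): one score per tag

# The classification head is untrained here, so predictions are meaningless
# until the model has been fine-tuned
print(logits.argmax(-1))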

Named Entity Recognition

Model     CoNLL-2002   SoNaR-1   spaCy UD LassySmall
BERTje    90.24        84.93     86.10
mBERT     88.61        84.19     86.77
BERT-NL   85.05        80.45     81.62
RobBERT   84.72        81.98     79.84

Part-of-speech tagging

Model     UDv2.5 (LassySmall)
BERTje    96.48
mBERT     96.20
BERT-NL   96.10
RobBERT   95.91
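
Once such a fine-tuned checkpoint is available, tagging text takes a few lines. The model name below is a placeholder for illustration only; substitute one of the fine-tuned models linked from the tables above:

from transformers import pipeline

# Placeholder checkpoint name, not a real model id
tagger = pipeline("token-classification", model="path/to/finetuned-bertje")
print(tagger("Ik fiets elke dag naar mijn werk."))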

BibTeX entry and citation info

@misc{devries2019bertje,
  title = {{BERTje}: {A} {Dutch} {BERT} {Model}},
  shorttitle = {{BERTje}},
  author = {de Vries, Wietse and van Cranenburgh, Andreas and Bisazza, Arianna and Caselli, Tommaso and van Noord, Gertjan and Nissim, Malvina},
  year = {2019},
  month = dec,
  howpublished = {arXiv:1912.09582},
  url = {http://arxiv.org/abs/1912.09582},
}