---
license: mit
datasets:
- allenai/c4
language:
- de
library_name: transformers
pipeline_tag: fill-mask
---

# BERTchen-v0.1

An efficiently pretrained [MosaicBERT](https://huggingface.co/mosaicml/mosaic-bert-base) model on the German portion of [C4](https://huggingface.co/datasets/allenai/c4).
Paper and code following soon.
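
A minimal usage sketch with the Transformers fill-mask pipeline (the example sentence is ours, and we assume the checkpoint's custom MosaicBERT code requires `trust_remote_code=True`):

```python
from transformers import pipeline

# Sketch only: MosaicBERT-style checkpoints usually ship custom modeling code,
# so trust_remote_code=True is assumed to be required here.
fill_mask = pipeline(
    "fill-mask",
    model="frederic-sadrieh/BERTchen-v0.1",
    trust_remote_code=True,
)

# Example German sentence; we assume the tokenizer uses the standard [MASK] token.
for prediction in fill_mask("Die Hauptstadt von Deutschland ist [MASK]."):
    print(prediction["token_str"], prediction["score"])
```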

## Model description

BERTchen follows the architecture of a MosaicBERT model (introduced in [this paper](https://arxiv.org/abs/2312.17482)) and uses [FlashAttention 2](https://arxiv.org/abs/2307.08691). It was pretrained for 4 hours on a single A100 40GB GPU.

Only the masked language modeling objective is used, which makes the `[CLS]` token redundant; it is therefore excluded from the tokenizer. As pretraining data, a random subset of the German portion of the C4 dataset (introduced in [this paper](https://arxiv.org/abs/1910.10683)) is used.

The tokenizer is taken from prior work on efficient German pretraining ([paper](https://openreview.net/forum?id=VYfJaHeVod), [code](https://github.com/konstantinjdobler/tight-budget-llm-adaptation)).
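
For lower-level access, here is a sketch of loading the tokenizer and masked-LM head directly; the class choices and the `trust_remote_code=True` flag are assumptions based on the description above:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "frederic-sadrieh/BERTchen-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True is assumed to be needed for the MosaicBERT architecture.
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)

# Only the MLM objective was used, so there is no [CLS] token to prepend;
# the masked sentence is encoded as-is.
inputs = tokenizer("Berlin ist die [MASK] von Deutschland.", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Take the most likely token at the masked position.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```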

## Training procedure
BERTchen was pretrained with the MosaicBERT hyperparameters (documented in the [paper](https://arxiv.org/abs/2312.17482) and [here](https://github.com/mosaicml/examples/blob/main/examples/benchmarks/bert/yamls/main/mosaic-bert-base-uncased.yaml)). We changed the training goal to 2500 steps to better reflect the number of steps the model can complete within the constrained time budget. In addition, we used a batch size of 1024 with a sequence length of 512, as we found this to work better. After 4 hours, training is stopped and the checkpoint is saved.
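
The key settings described above, summarized as a sketch (only the values stated in this section come from the card; the key names are illustrative, not the actual MosaicBERT config keys):

```python
# Illustrative summary of the pretraining setup; names loosely mirror the
# MosaicBERT YAML and are not exact config keys.
pretraining_config = {
    "base_recipe": "mosaic-bert-base-uncased",   # linked MosaicBERT hyperparameters
    "objective": "masked-language-modeling",     # MLM only, no [CLS] token
    "training_goal_steps": 2500,                 # adjusted training goal
    "global_batch_size": 1024,
    "max_seq_len": 512,
    "wall_clock_budget_hours": 4,                # training is cut after 4 hours
    "hardware": "1x NVIDIA A100 40GB",
}
```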

## Evaluation results
| Model | GermanQuAD (F1/EM) | GermEval 2017 Task B | GermEval 2024 Subtask 1 (majority vote) |
|:----:|:-----------:|:----:|:----:|
| BERTchen-v0.1 | 96.4/93.6 | 0.96 | 0.887 |

## Model variations
For the creation of BERTchen we tested different datasets and training setups. Two notable variants are:

- [`BERTchen-v0.1`](https://huggingface.co/frederic-sadrieh/BERTchen-v0.1): the same pretraining setup, but on the [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset.
- [`hybrid_BERTchen-v0.1`](https://huggingface.co/frederic-sadrieh/hybrid_BERTchen-v0.1): pretrained on [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) with our hybrid sequence-length switching approach (see its model card or the paper for more information).