---
language:
- nl
license: apache-2.0
tags:
- generated_from_trainer
- GEITje
datasets:
- Rijgersberg/GEITje-pretrain-10b
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: GEITje-v1-7B
results: []
---
# GEITje-7B
GEITje is a large open Dutch language model with 7 billion parameters, based on Mistral 7B.
It has been further trained on 10 billion tokens of Dutch text.
This has improved its Dutch language skills and increased its knowledge of Dutch topics.
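A minimal usage sketch with 🤗 Transformers follows; the repository id `Rijgersberg/GEITje-7B` is assumed from the leaderboard link below, and the prompt and generation settings are only illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Rijgersberg/GEITje-7B"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",  # requires the accelerate package
)

# GEITje is a further-pretrained base model, not a chat model,
# so give it plain Dutch text to complete.
prompt = "De hoofdstad van Nederland is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```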
## Model description
### _Mistral_ – Base Model
GEITje is based on [Mistral 7B](https://mistral.ai/news/announcing-mistral-7b/).
It's a large open language model with 7 billion parameters,
trained by [Mistral AI](https://mistral.ai).
According to Mistral AI, the 7B model performs better than [Llama 2](https://ai.meta.com/llama/) 13B on all (English-language) benchmarks they tested it on.
Mistral 7B has been released under the Apache 2.0 open source license.
### _GEITje_ – Trained Further on Dutch Texts
GEITje was created by further training Mistral 7B on no less than 10 billion tokens of Dutch text from the [Dutch Gigacorpus](http://gigacorpus.nl) and the [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) web crawling corpus.
It is a so-called _full-parameter finetune_, performed on all parameters.
It is not a [PEFT](https://huggingface.co/blog/peft) or [LoRA](https://huggingface.co/docs/peft/conceptual_guides/lora) finetune.
Like Mistral, GEITje has a _context length_ of 8,192 tokens.
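For contrast with the PEFT/LoRA case mentioned above: a LoRA finetune would be published as a small adapter and loaded on top of the base model via the `peft` library, whereas GEITje ships complete weights and needs no such step. A rough sketch, in which the adapter id is a placeholder rather than a real repository:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Loading a hypothetical LoRA adapter on top of the base model -- not needed for
# GEITje, which already contains the fully finetuned weights.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "example-org/example-dutch-lora")  # placeholder adapter id
```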
## More info
Read more about GEITje in the [📄 README](https://github.com/Rijgersberg/GEITje/blob/main/README-en.md) on GitHub.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 953
- training_steps: 9536
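Since this section was generated by the 🤗 Trainer, the settings above map roughly onto `TrainingArguments` as sketched below; the output directory and `bf16` mixed precision are assumptions, everything else is copied from the list:

```python
from transformers import TrainingArguments

# Effective batch size: 2 per device x 8 GPUs x 8 gradient-accumulation steps = 128.
training_args = TrainingArguments(
    output_dir="geitje-7b",            # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,
    max_steps=9536,
    warmup_steps=953,
    lr_scheduler_type="cosine",
    seed=42,
    optim="adamw_torch",               # Adam with betas=(0.9, 0.999), eps=1e-8 are the defaults
    bf16=True,                         # assumption: not stated on the card
)
```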
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6995 | 0.02 | 199 | 1.7673 |
| 1.6949 | 0.04 | 398 | 1.6880 |
| 1.6377 | 0.06 | 597 | 1.6429 |
| 1.6011 | 0.08 | 796 | 1.6384 |
| 1.5196 | 0.1 | 995 | 1.6060 |
| 1.5158 | 0.13 | 1194 | 1.5832 |
| 1.5181 | 0.15 | 1393 | 1.5541 |
| 1.4931 | 0.17 | 1592 | 1.5493 |
| 1.4972 | 0.19 | 1791 | 1.5407 |
| 1.5349 | 0.21 | 1990 | 1.5305 |
| 1.5025 | 0.23 | 2189 | 1.5263 |
| 1.396 | 0.25 | 2388 | 1.5140 |
| 1.4353 | 0.27 | 2587 | 1.5104 |
| 1.4307 | 0.29 | 2786 | 1.5003 |
| 1.3974 | 0.31 | 2985 | 1.4849 |
| 1.404 | 0.33 | 3184 | 1.4771 |
| 1.4299 | 0.35 | 3383 | 1.4825 |
| 1.4342 | 0.38 | 3582 | 1.4705 |
| 1.4341 | 0.4 | 3781 | 1.4643 |
| 1.4535 | 0.42 | 3980 | 1.4580 |
| 1.4799 | 0.44 | 4179 | 1.4521 |
| 1.35 | 0.46 | 4378 | 1.4478 |
| 1.4586 | 0.48 | 4577 | 1.4425 |
| 1.3685 | 0.5 | 4776 | 1.4368 |
| 1.4572 | 0.52 | 4975 | 1.4313 |
| 1.3293 | 0.54 | 5174 | 1.4265 |
| 1.403 | 0.56 | 5373 | 1.4241 |
| 1.3057 | 0.58 | 5572 | 1.4188 |
| 1.244 | 0.61 | 5771 | 1.4178 |
| 1.3224 | 0.63 | 5970 | 1.4110 |
| 1.3238 | 0.65 | 6169 | 1.4083 |
| 1.3262 | 0.67 | 6368 | 1.4050 |
| 1.3237 | 0.69 | 6567 | 1.4027 |
| 1.0453 | 0.71 | 6766 | 1.4005 |
| 1.3136 | 0.73 | 6965 | 1.3992 |
| 1.3137 | 0.75 | 7164 | 1.3975 |
| 1.1587 | 0.77 | 7363 | 1.3964 |
| 1.316 | 0.79 | 7562 | 1.3957 |
| 1.2738 | 0.81 | 7761 | 1.3951 |
| 1.308 | 0.83 | 7960 | 1.3949 |
| 1.4049 | 0.86 | 8159 | 1.3946 |
| 1.3324 | 0.88 | 8358 | 1.3944 |
| 1.3446 | 0.9 | 8557 | 1.3944 |
| 1.2489 | 0.92 | 8756 | 1.3943 |
| 1.2687 | 0.94 | 8955 | 1.3943 |
| 1.3293 | 0.96 | 9154 | 1.3943 |
| 1.3045 | 0.98 | 9353 | 1.3943 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Rijgersberg__GEITje-7B).
| Metric |Value|
|---------------------------------|----:|
|Avg. |50.53|
|AI2 Reasoning Challenge (25-Shot)|44.80|
|HellaSwag (10-Shot) |75.31|
|MMLU (5-Shot) |50.10|
|TruthfulQA (0-shot) |40.45|
|Winogrande (5-shot) |72.38|
|GSM8k (5-shot) |20.17|
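As a quick arithmetic check, the reported average is simply the mean of the six benchmark scores:

```python
scores = {
    "ARC (25-shot)": 44.80,
    "HellaSwag (10-shot)": 75.31,
    "MMLU (5-shot)": 50.10,
    "TruthfulQA (0-shot)": 40.45,
    "Winogrande (5-shot)": 72.38,
    "GSM8k (5-shot)": 20.17,
}
average = sum(scores.values()) / len(scores)
# ≈ 50.5, consistent with the reported Avg. of 50.53 (the leaderboard averages unrounded scores)
print(average)
```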