Model Overview

NB-Llama-3.1-11B-Vision-Instruct is part of the NB-Llama-3.1 series of models, trained on top of Llama-3.1-8B-Instruct. This multilingual generative model was fine-tuned specifically to support Norwegian Bokmål, Norwegian Nynorsk, and English, with partial support for Swedish and Danish.

The Instruct model is trained with Supervised Fine-Tuning (SFT) followed by Direct Preference Optimization (DPO). The SFT stage is based on synthetic datasets: the English Magpie dataset and a translated and filtered version of it. The DPO stage is based on Anthropic's Helpful and Harmless dataset. The training is intentionally fairly basic, and is mainly meant to give the models a decent understanding of the chat template.
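To make the training setup concrete, the sketch below illustrates the shape of an SFT chat record and of a DPO preference pair in the style of Anthropic's Helpful and Harmless data. The field names follow common trl-style conventions and the example texts are made up for illustration; they are not taken from the actual NB-Llama training data.

```python
# Hypothetical examples illustrating the data shapes only, not the real training data.

# SFT: a chat-formatted record that gets rendered through the model's chat template.
sft_example = {
    "messages": [
        {"role": "user", "content": "Forklar kort hva Nasjonalbiblioteket gjør."},
        {"role": "assistant", "content": "Nasjonalbiblioteket samler inn, bevarer og formidler norsk kulturarv."},
    ]
}

# DPO: a preference pair (prompt, chosen, rejected) in the style of Anthropic's HH data.
dpo_example = {
    "prompt": "Hvordan skriver jeg en høflig e-post på norsk?",
    "chosen": "Start med en vennlig hilsen, kom raskt til poenget, og avslutt med 'Med vennlig hilsen'.",
    "rejected": "Bare skriv hva som helst; tonen spiller ingen rolle.",
}
```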

The basic idea behind this model series was to explore how current state-of-the-art models could be improved for Norwegian by training only on publicly available data. While these models are trained by the National Library of Norway, they do not include data that is only available through legal deposit. They do, however, contain public data such as governmental reports that are both publicly available and legally deposited.

The Vision-Instruct model is highly experimental. It was built by merging the vision components of meta-llama/Llama-3.2-11B-Vision-Instruct with the NB version of the Llama-3.1 text model.
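For reference, here is a minimal inference sketch assuming the standard transformers API for Llama-3.2-style vision models (transformers >= 4.45). The repo id is this model's; the image URL and prompt are placeholders you should replace with your own.

```python
# A minimal sketch, assuming the standard transformers Mllama API (transformers >= 4.45).
# The image URL and prompt below are illustrative placeholders.
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor

model_id = "NbAiLab/nb-llama-3.1-11B-Vision-Instruct"

model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open(requests.get("https://example.com/bilde.jpg", stream=True).raw)

# Build a chat-formatted prompt with one image and one Norwegian text instruction.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Beskriv dette bildet på norsk bokmål."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```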

Why are we releasing this model?

We are releasing this model because Unsloth has published fine-tuning code for Llama vision models. Please experiment with it and let us know your results.
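As a starting point, here is a minimal LoRA sketch assuming Unsloth's FastVisionModel API. The hyperparameters are illustrative placeholders rather than the settings used for this model, and the full training loop (dataset, data collator, trainer) is left to the Unsloth documentation and notebooks.

```python
# A minimal sketch assuming Unsloth's FastVisionModel API; hyperparameters are
# illustrative placeholders, not the settings used to train this model.
from unsloth import FastVisionModel

model, tokenizer = FastVisionModel.from_pretrained(
    "NbAiLab/nb-llama-3.1-11B-Vision-Instruct",
    load_in_4bit=True,  # 4-bit loading so the 11B model fits on a single GPU
)

# Attach LoRA adapters; both the vision and language towers can be fine-tuned.
model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers=True,
    finetune_language_layers=True,
    r=16,
    lora_alpha=16,
)

# From here, pair the model with a vision chat dataset and a trainer (e.g. trl's
# SFTTrainer with Unsloth's vision data collator) to run the actual fine-tuning.
```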

Model size: 10.7B parameters (Safetensors, BF16)
