
[GovLLM Ultra banner]

GovLLM-7B-ultra

A question-answering model about the Dutch government.

Model description

This model is a fine-tuned version of the Dutch conversational model BramVanroy/GEITje-7B-ULTRA, trained on a dataset of Dutch question-answer pairs about the Dutch government. It is ultimately based on Mistral and was fine-tuned with SFT using LoRA. Training for 3 epochs took almost 2 hours on a single NVIDIA A100 (40 GB VRAM).
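
For local use, a minimal inference sketch with transformers. This assumes the model inherits the chat template of its GEITje-7B-ULTRA base and that a CUDA GPU is available; adjust the dtype and device for your hardware.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nelis5174473/GovLLM-7B-ultra"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# "Does the government give subsidies to companies?"
messages = [{"role": "user", "content": "Geeft de overheid subsidie aan bedrijven?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))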

Usage with Inference Endpoints (Dedicated)

import requests

# Replace with the URL and token of your own dedicated endpoint.
API_URL = "https://your-own-endpoint.us-east-1.aws.endpoints.huggingface.cloud"
headers = {"Authorization": "Bearer hf_your_own_token"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    response.raise_for_status()  # fail loudly on authentication or server errors
    return response.json()

# "Does the government give subsidies to companies?"
output = query({
    "inputs": "Geeft de overheid subsidie aan bedrijven?"
})

# print the generated answer
print(output[0]['generated_text'])
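
Generation settings can be passed alongside the prompt. A minimal sketch, assuming the endpoint runs Hugging Face's standard text-generation (TGI) container; the parameter names below are TGI's, not specific to this model:

output = query({
    "inputs": "Geeft de overheid subsidie aan bedrijven?",
    "parameters": {
        "max_new_tokens": 250,      # upper bound on the answer length
        "temperature": 0.7,         # lower values give more deterministic answers
        "return_full_text": False,  # return only the completion, not the prompt
    },
})
print(output[0]["generated_text"])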

Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto a peft/transformers training setup follows the list):

  • block_size: 1024
  • model_max_length: 2048
  • padding: right
  • mixed_precision: fp16
  • learning rate (lr): 0.00003
  • epochs: 3
  • batch_size: 2
  • optimizer: adamw_torch
  • scheduler: linear
  • quantization: int8
  • peft: true
  • lora_r: 16
  • lora_alpha: 16
  • lora_dropout: 0.05
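
For reference, a sketch of how these settings map onto a peft LoraConfig and transformers TrainingArguments. The target_modules choice is an assumption (common for Mistral-based models), not taken from the original run:

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments

# Load the base model with int8 quantization (the card's "quantization: int8").
model = AutoModelForCausalLM.from_pretrained(
    "BramVanroy/GEITje-7B-ULTRA",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# LoRA adapter matching the listed lora_r / lora_alpha / lora_dropout.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    # Assumed target modules, typical for Mistral-style models; not stated on the card.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)

# Trainer settings matching the listed values.
training_args = TrainingArguments(
    output_dir="govllm-7b-ultra",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    learning_rate=3e-5,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    fp16=True,
)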

Training results

Epoch  Loss    Grad norm  Learning rate  Step
0.14   1.3183  0.6038     1.3888e-05     25/540
0.42   1.0220  0.4180     2.8765e-05     75/540
0.69   0.9251  0.4119     2.5679e-05     125/540
0.97   0.9260  0.4682     2.2592e-05     175/540
1.25   0.8586  0.5338     1.9506e-05     225/540
1.53   0.8767  0.6359     1.6420e-05     275/540
1.80   0.8721  0.6137     1.3333e-05     325/540
2.08   0.8469  0.7310     1.0247e-05     375/540
2.36   0.8324  0.7945     7.1605e-06     425/540
2.64   0.8170  0.8522     4.0741e-06     475/540
2.91   0.8185  0.8562     9.8765e-07     525/540
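
The learning-rate column reflects the linear scheduler with warmup: the rate climbs toward the configured 0.00003 early in training, then decays linearly toward zero by the final step. A rough sketch of that shape; the warmup length of ~54 steps (about 10% of the 540 total) is inferred from the table, not a logged setting:

def linear_lr(step, peak=3e-5, warmup=54, total=540):
    # Linear warmup to the peak rate, then linear decay to zero,
    # as in transformers' get_linear_schedule_with_warmup.
    if step < warmup:
        return peak * step / warmup
    return peak * (total - step) / (total - warmup)

for step in (25, 75, 225, 525):
    print(step, f"{linear_lr(step):.4e}")  # approximately matches the table values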
