
Munin-7b-alpha instruction fine-tuned

This is Munin-7b-alpha from Danish Foundation Models, fine-tuned by yours truly for 1 epoch on kobprof/skolegpt-instruct using the code from this notebook by the Alexandra Institute.
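The notebook itself is not linked here, so the snippet below is only a rough sketch of how a one-epoch LoRA fine-tune on kobprof/skolegpt-instruct could be set up with the Hugging Face peft/trl stack; the base-model id, LoRA hyperparameters, prompt template and dataset column names are assumptions, not the notebook's actual settings.

```python
# Illustrative only: a one-epoch LoRA fine-tune in the spirit of the run described
# on this card. Hyperparameters, column names and the prompt template are assumptions.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

base_model = "danish-foundation-models/munin-7b-alpha"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

dataset = load_dataset("kobprof/skolegpt-instruct", split="train")

def format_example(example):
    # Hypothetical OpenOrca-style columns; check the dataset's actual schema.
    return f"{example['system_prompt']}\n\n{example['question']}\n\n{example['response']}"

# Small adapter on top of the frozen base model; rank/alpha/dropout are illustrative.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    formatting_func=format_example,
    peft_config=peft_config,
    args=SFTConfig(output_dir="munin-skolegpt-lora", num_train_epochs=1),
)
trainer.train()
```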

Training ran on a single Nvidia RTX A4000 GPU, using 13.82 GB of GPU memory (87.84%), of which 8.71 GB (55.39%) was used for LoRA.
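For reference, a minimal sketch of how such peak-memory figures can be measured with PyTorch follows; whether the notebook used this or another method (e.g. nvidia-smi) is not stated on the card.

```python
# Sketch of measuring peak GPU memory around a training run with PyTorch.
import torch

torch.cuda.reset_peak_memory_stats()
# ... run the training loop here ...
peak = torch.cuda.max_memory_allocated()                  # peak allocated bytes
total = torch.cuda.get_device_properties(0).total_memory  # total GPU memory in bytes
print(f"peak: {peak / 1e9:.2f} GB of {total / 1e9:.2f} GB ({100 * peak / total:.2f}%)")
```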

The model trained for just shy of 4 hours, consuming a total of 0.694 kWh (as estimated with CodeCarbon) and emitting approximately 57 gCO2e (the average CO2e emissions per kWh during training were 82.5 g, as per https://www.energidataservice.dk/tso-electricity/CO2Emis).
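The reported figures are mutually consistent: 0.694 kWh × 82.5 g/kWh ≈ 57 gCO2e. Below is a minimal sketch of how such an energy estimate can be produced with CodeCarbon; the tracker settings are illustrative, not necessarily those used for this run.

```python
# Sketch of tracking energy use and emissions with CodeCarbon around a training run.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run training here ...
emissions_kg = tracker.stop()  # estimated emissions in kg CO2e
print(f"estimated emissions: {emissions_kg * 1000:.1f} gCO2e")
```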

Safetensors · Model size: 7.24B params · Tensor type: BF16
