
Model Card for Bellman

This version of Bellman is finetuned from Llama-3 Instruct 8B. Whether it is better at Swedish than the base model is arguable, since Llama-3 is already strong at Swedish. It is, however, finetuned for prompt question answering, based on a dataset created from Swedish Wikipedia with many Sweden-centric questions. New since previous versions are questions from a translated code-feedback dataset, as well as a number of stories. It is not great at generating stories, but better than previously.
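Since the finetune targets single-turn question answering on the Llama-3 Instruct base, prompts presumably follow the standard Llama-3 chat template. A minimal sketch of formatting such a prompt by hand (the system message and Swedish question are made-up examples, not from the training data):

```python
def build_prompt(question: str, system: str = "Du är en hjälpsam assistent.") -> str:
    """Format a single-turn question with the Llama-3 Instruct chat template,
    leaving the assistant header open so the model continues with its answer."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{question}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("Vilket år grundades Uppsala universitet?")
```

In practice, `tokenizer.apply_chat_template(...)` from the `transformers` library produces the same layout from a list of messages and is less error-prone than hand-formatting.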

Please note: the Hugging Face inference API probably tries to load the LoRA adapter on its own, which will not work.

240609: I've uploaded a 4-bit GPTQ quant, but it's completely untested.


Model Details

Training run on 240606:

| Step | Training Loss | Validation Loss | Note |
|-----:|--------------:|----------------:|------|
| 25   | 1.506400      | 1.164538        |      |
| 50   | 1.128500      | 1.059316        |      |
| 75   | 1.095100      | 1.040511        |      |
| 100  | 1.068700      | 1.031033        |      |
| 125  | 1.061300      | 1.024377        |      |
| 150  | 1.035700      | 1.017490        |      |
| 175  | 1.061200      | 1.012095        |      |
| 200  | 1.031600      | 1.007867        |      |
| 225  | 1.031900      | 1.002652        |      |
| 250  | 0.958300      | 1.003817        |      |
| 275  | 0.967900      | 1.000483        |      |
| 300  | 0.950000      | 0.998807        |      |
| 325  | 0.974300      | 0.996894        |      |
| 350  | 0.960700      | 0.994098        |      |
| 375  | 0.956000      | 0.991491        |      |
| 400  | 0.940500      | 0.988697        |      |
| 425  | 0.949100      | 0.987253        |      |
| 450  | 0.940600      | 0.986425        | Picked checkpoint |
| 475  | 0.888300      | 0.994204        |      |
| 500  | 0.881700      | 0.994897        |      |
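The checkpoint choice above can be reproduced mechanically: step 450 has the lowest validation loss, after which validation loss starts rising while training loss keeps falling (a sign of overfitting). A sketch over the logged values:

```python
# (step, validation_loss) pairs from the 240606 training run
log = [
    (25, 1.164538), (50, 1.059316), (75, 1.040511), (100, 1.031033),
    (125, 1.024377), (150, 1.017490), (175, 1.012095), (200, 1.007867),
    (225, 1.002652), (250, 1.003817), (275, 1.000483), (300, 0.998807),
    (325, 0.996894), (350, 0.994098), (375, 0.991491), (400, 0.988697),
    (425, 0.987253), (450, 0.986425), (475, 0.994204), (500, 0.994897),
]

# Pick the checkpoint with the lowest validation loss
best_step, best_loss = min(log, key=lambda row: row[1])
print(best_step, best_loss)  # 450 0.986425
```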

Model Description

  • Developed by: Me
  • Funded by: Me
  • Model type: Instruct
  • Language(s) (NLP): Swedish
  • License: llama-3
  • Finetuned from model: Llama3 Instruct 8b

Model Card Contact

rickard@mindemia.com

