
Model Card for Bellman

This version of Bellman is finetuned from Mistral-Nemo-Instruct-2407. It's a rank-128 QLoRA adapter trained for about 1 epoch. It's finetuned for question answering, based on a dataset created from Swedish Wikipedia, with many Sweden-centric questions. New since previous versions are questions from a translated code-feedback dataset, as well as a number of stories.

Consider this a work in progress while I adjust the training for this new model size. I will provide a few updates to the model.

Output quality seems to depend on the initial seed: sometimes it's just amazing, and sometimes quite bad. If you're having trouble getting good results, try lowering the temperature.
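For example, here is a minimal sketch of sampling with a lower temperature via the transformers library. The temperature value and other generation settings are illustrative assumptions, not tested recommendations.

```python
# Minimal sketch: load the model and sample with a lower temperature.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "neph1/Mistral-Nemo-Instruct-bellman-12b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Berätta lite om Sverige"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.3,  # illustrative low value; tune to taste
)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```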

For GGUF quantizations, see:
https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-bellman-12b-GGUF and
https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-bellman-12b-i1-GGUF
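
As a rough sketch, one of the GGUF files can be run locally with llama-cpp-python. The context size, GPU offload, and sampling settings below are assumptions to adapt to your setup; the file name matches the quant used in the output example further down.

```python
# Minimal sketch: run a GGUF quantization locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-Nemo-Instruct-bellman-12b.i1-Q4_K_M.gguf",
    n_ctx=4096,        # assumption; pick a context size that fits your RAM
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Berätta lite om Sverige"}],
    temperature=0.3,   # lower temperature, per the note above
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```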

Model Details

LoRA rank: 128

Trained with Unsloth on an RTX 3090

Differences from base model:

The base model is already fairly good at Swedish, but my 'vibe check' says this finetune is slightly improved, with fewer odd wordings.

Bellman is trained on fairly short answers and tends to be less verbose.

Output example (Mistral-Nemo-Instruct-bellman-12b.i1-Q4_K_M.gguf):

User: "Hej!"

Bellman: "Hej! Vad kan jag hjälpa dig med?"

User: "Berätta lite om Sverige"

Bellman: "Sverige är ett land i norra Europa med en rik historia. Det består av 21 län och har en yta på drygt 450,000 kvadratkilometer. Stockholm är Sveriges huvudstad."

User: "När grundades det?"

Bellman: "Det är inte exakt känt när Sverige grundades. Det var först under 1300-talet att Sverige började ge sig uttryck i internationella affärer."

User: "Vad hände 1523?"

Bellman: "År 1523 blev Gustav Vasa kung över Sverige och inledde reformationen i landet. Han var också den förste monarken att bära titeln "Sveriges Konung"."

User: "Hur länge levde han?"

Bellman: "Gustav Vasa levde från år 1496 till 1560."

Training Parameters

per_device_train_batch_size = 2,
gradient_accumulation_steps = 64,
num_train_epochs = 3,
warmup_steps = 5,
learning_rate = 1e-4,
logging_steps = 15,
optim = "adamw_8bit",
weight_decay = 0.01,
lr_scheduler_type = "linear",
seed = 3407,
per_device_eval_batch_size = 2,
evaluation_strategy = "steps",
eval_accumulation_steps = 64,
eval_steps = 15,
eval_delay = 0,
save_strategy = "steps",
save_steps = 50,
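
For context, below is a minimal sketch of how these parameters would slot into an Unsloth + TRL training run. Only the hyperparameters listed above come from the actual training; the dataset variables, max_seq_length, lora_alpha, and target_modules are assumptions not stated in this card.

```python
# Minimal sketch, assuming an Unsloth + TRL SFT setup.
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mistralai/Mistral-Nemo-Instruct-2407",
    max_seq_length=4096,       # assumption; not stated in the card
    load_in_4bit=True,         # QLoRA: base weights quantized to 4-bit
)

model = FastLanguageModel.get_peft_model(
    model,
    r=128,                     # LoRA rank from the card
    lora_alpha=128,            # assumption; alpha is not stated
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # common choice, assumed
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,   # hypothetical; the card's dataset is not published
    eval_dataset=eval_dataset,     # hypothetical
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=64,
        num_train_epochs=3,
        warmup_steps=5,
        learning_rate=1e-4,
        logging_steps=15,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        seed=3407,
        per_device_eval_batch_size=2,
        evaluation_strategy="steps",
        eval_accumulation_steps=64,
        eval_steps=15,
        eval_delay=0,
        save_strategy="steps",
        save_steps=50,
    ),
)
trainer.train()
```

Note that with per_device_train_batch_size = 2 and gradient_accumulation_steps = 64, the effective batch size is 128 examples per optimizer step.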

Model Description

  • Developed by: Me
  • Funded by: Me
  • Model type: Instruct
  • Language(s) (NLP): Swedish
  • License: Apache 2.0
  • Finetuned from model: Mistral-Nemo-Instruct-2407

Model Card Contact

rickard@mindemia.com
