---
base_model:
- unsloth/Mistral-Nemo-Base-2407-bnb-4bit
library_name: transformers
tags:
- unsloth
- trl
- sft
license: apache-2.0
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6569a4ed2419be6072890cf8/T_ITjuaHakgamjwuElcAs.png)

# Luca-MN-bf16

This model started out as an experiment but turned out quite good. It both named itself and wrote the image-generation prompt for its own avatar (above).

Created by running a high-rank LoRA pass over Nemo-Base for 2 epochs on RP data, then a low-rank pass for 0.5 epochs on the c2 data, and finally 3 epochs of DPO using [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1).
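
For illustration only, here is a rough sketch of what that multi-stage setup could look like with `peft`/`trl`. The ranks, alphas, and target modules are placeholders, not the actual values used for this model.

```python
# Illustrative sketch only -- ranks, alphas, and target modules are
# guesses, not the settings actually used for this model.
from datasets import load_dataset
from peft import LoraConfig

# Stage 1: high-rank LoRA SFT pass over Nemo-Base (2 epochs, RP data)
stage1 = LoraConfig(
    r=256, lora_alpha=256,  # "high-r"; the real rank is not published
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Stage 2: low-rank pass (0.5 epochs, c2 data)
stage2 = LoraConfig(
    r=16, lora_alpha=16,    # "low-r"; likewise a guess
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Stage 3: 3 epochs of DPO on the preference dataset cited above
dpo_data = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

# Each stage would then be run in turn with trl's SFTTrainer (stages 1-2)
# and DPOTrainer (stage 3), merging the adapter between stages.
```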

## Prompting

Use the `Mistral V3-Tekken` context and instruct templates. A temperature of about `1.25` seems to be the sweet spot, with either MinP at `0.05` or TopP at `0.9`. Set DRY/smoothing etc. according to your preference.
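
If you run the model with `transformers` directly rather than a frontend, a minimal sketch of these sampler settings looks like this (assuming the tokenizer ships the V3-Tekken chat template and a recent `transformers` release with `min_p` support; `model_id` is a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Luca-MN-bf16"  # placeholder -- substitute the full repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# If the tokenizer lacks a chat template, build the V3-Tekken
# [INST] ... [/INST] prompt manually instead.
messages = [{"role": "user", "content": "Introduce yourself."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.25,  # the sweet spot noted above
    min_p=0.05,        # or drop this and set top_p=0.9 instead
)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```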

## Quantized versions

- [iMat GGUFs](https://huggingface.co/Quant-Cartel/Luca-MN-iMat-GGUF), courtesy of the [Quant-Cartel](https://huggingface.co/Quant-Cartel/)