I have no idea what I’m doing… if this causes the apocalypse, someone please let me know.

Luca-MN-bf16 8.0bpw h8 EXL2

Includes the measurement.json file for further quantization.
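
If you want to requantize to a different bitrate, the bundled measurement.json lets exllamav2's convert.py skip the slow measurement pass. Below is a hypothetical sketch of such a run; all paths and the 6.0 bpw target are placeholders chosen purely for illustration, not something tested here.

```python
# Hypothetical re-quantization using exllamav2's convert.py and the bundled
# measurement.json (paths and bitrate are assumptions, not tested values).
import subprocess

subprocess.run(
    [
        "python", "convert.py",                    # exllamav2's conversion script
        "-i", "/models/Luca-MN-bf16",              # original bf16 weights (assumed path)
        "-o", "/tmp/exl2-work",                    # scratch / working directory
        "-cf", "/models/Luca-MN-6.0bpw-exl2",      # output directory (assumed path)
        "-b", "6.0",                               # target bits per weight
        "-m", "measurement.json",                  # reuse the bundled measurement pass
    ],
    check=True,
)
```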

I’ll be taking a break until mid-2025 to recoup some funds. I might still do small models occasionally, but anything big will have to wait. See you all next year! 😊

Original Model: https://huggingface.co/rAIfle/Luca-MN-bf16

Original Model Card

Luca-MN-bf16

This thing was just intended as an experiment, but it turned out quite good. I had it both name itself and write its own image-generation prompt.

Created by running a high-rank LoRA pass over Nemo-Base for 2 epochs on some RP data, then a low-rank pass for 0.5 epochs on the c2 data, then 3 epochs of DPO using jondurbin/gutenberg-dpo-v0.1.
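
For illustration only, here is a rough sketch of what the final DPO stage might look like with recent trl/peft/transformers releases. The base checkpoint ID, LoRA rank, and hyperparameters are placeholder assumptions, not the author's actual recipe.

```python
# Rough, hypothetical sketch of a LoRA + DPO stage on the gutenberg-dpo data.
# All hyperparameters and the base-model ID are assumptions for illustration.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig
from trl import DPOConfig, DPOTrainer

base = "mistralai/Mistral-Nemo-Base-2407"  # assumed Nemo-Base checkpoint
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# prompt / chosen / rejected preference pairs
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

peft_config = LoraConfig(
    r=32, lora_alpha=64, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

args = DPOConfig(
    output_dir="luca-mn-dpo",
    num_train_epochs=3,            # matches the 3 DPO epochs mentioned above
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    beta=0.1,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,    # `tokenizer=` on older trl releases
    peft_config=peft_config,
)
trainer.train()
```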

Prompting

Use the Mistral V3-Tekken context and instruct templates. A temperature of about 1.25 seems to be the sweet spot, with either MinP at 0.05 or TopP at 0.9. DRY, smoothing, etc. are down to your preference.
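
For reference, a minimal sketch of loading this EXL2 quant with the exllamav2 Python library and applying the sampler settings suggested above. The local path and the prompt are placeholders, and the API shown assumes a recent exllamav2 release.

```python
# Minimal sketch: load the EXL2 quant with exllamav2 and generate with the
# suggested samplers (temperature ~1.25, MinP 0.05). Paths are placeholders.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/Luca-MN-bf16-8.0bpw-h8-exl2"  # local download path (assumed)
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 1.25   # suggested sweet spot
settings.min_p = 0.05         # use either MinP 0.05 or TopP 0.9
settings.top_p = 1.0          # leave TopP disabled when using MinP

# Mistral V3-Tekken style instruct formatting
prompt = "[INST]Write a short scene introducing Luca.[/INST]"
print(generator.generate_simple(prompt, settings, 200))
```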

Quantized versions
