
Triangle104/Lumimaid-v0.2-12B-Q8_0-GGUF

This model was converted to GGUF format from NeverSleep/Lumimaid-v0.2-12B using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.


Model details:

This model is based on: Mistral-Nemo-Instruct-2407

Wandb: https://wandb.ai/undis95/Lumi-Mistral-Nemo?nw=nwuserundis95

NOTE: As explained in the Mistral-Nemo-Instruct-2407 repo, it's recommended to use a low temperature; please experiment!

Lumimaid 0.1 -> 0.2 is a HUGE step up dataset-wise.

As some people have told us our models are sloppy, Ikari decided to say fuck it and literally nuke out all the chats with the most slop.

Our dataset has stayed the same since day one: we add data over time, clean it, and repeat. After not releasing a model for a while because we were never satisfied, we think it's time to come back!

Prompt template: Mistral

[INST] {input} [/INST] {output}
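
For reference, a single filled-in turn would look like the line below. The input/output pair is purely illustrative, not taken from the training data:

[INST] Describe a stormy night at sea in one sentence. [/INST] Waves hammered the hull as lightning split the black horizon.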

Credits:

Undi
IkariDev

Training data we used to make our dataset:

Epiculous/Gnosis
ChaoticNeutrals/Luminous_Opus
ChaoticNeutrals/Synthetic-Dark-RP
ChaoticNeutrals/Synthetic-RP
Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
Gryphe/Opus-WritingPrompts
meseca/writing-opus-6k
meseca/opus-instruct-9k
PJMixers/grimulkan_theory-of-mind-ShareGPT
NobodyExistsOnTheInternet/ToxicQAFinal
Undi95/toxic-dpo-v0.1-sharegpt
cgato/SlimOrcaDedupCleaned
kalomaze/Opus_Instruct_25k
Doctor-Shotgun/no-robots-sharegpt
Norquinal/claude_multiround_chat_30k
nothingiisreal/Claude-3-Opus-Instruct-15K
All the Aesirs dataset, cleaned, unslopped
All le luminae dataset, cleaned, unslopped
Small part of Airoboros reduced

We sadly didn't find the sources of the following; DM us if you recognize your set!

Opus_Instruct-v2-6.5K-Filtered-v2-sharegpt
claude_sharegpt_trimmed
CapybaraPure_Decontaminated-ShareGPT_reduced

Datasets credits:

Epiculous
ChaoticNeutrals
Gryphe
meseca
PJMixers
NobodyExistsOnTheInternet
cgato
kalomaze
Doctor-Shotgun
Norquinal
nothingiisreal

Others

Undi: If you want to support us, you can here.

IkariDev: Visit my retro/neocities style website please kek


Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

brew install llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo Triangle104/Lumimaid-v0.2-12B-Q8_0-GGUF --hf-file lumimaid-v0.2-12b-q8_0.gguf -p "The meaning to life and the universe is"
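
Since a low temperature is recommended for this model (see the note above), you can set it with the --temp flag. The value below is only a starting point to experiment with:

llama-cli --hf-repo Triangle104/Lumimaid-v0.2-12B-Q8_0-GGUF --hf-file lumimaid-v0.2-12b-q8_0.gguf --temp 0.3 -p "The meaning to life and the universe is"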

Server:

llama-server --hf-repo Triangle104/Lumimaid-v0.2-12B-Q8_0-GGUF --hf-file lumimaid-v0.2-12b-q8_0.gguf -c 2048
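
Once the server is up, you can query its OpenAI-compatible chat endpoint, for example with curl (a minimal sketch; llama-server listens on port 8080 by default):

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Write a one-line greeting."}], "temperature": 0.3}'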

Note: You can also use this checkpoint directly through the usage steps listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

git clone https://github.com/ggerganov/llama.cpp

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).

cd llama.cpp && LLAMA_CURL=1 make
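
For example, to build with CUDA support on a Linux machine with an Nvidia GPU (assuming the CUDA toolkit is installed), combine the flags mentioned above:

cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make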

Step 3: Run inference through the main binary.

./llama-cli --hf-repo Triangle104/Lumimaid-v0.2-12B-Q8_0-GGUF --hf-file lumimaid-v0.2-12b-q8_0.gguf -p "The meaning to life and the universe is"

or

./llama-server --hf-repo Triangle104/Lumimaid-v0.2-12B-Q8_0-GGUF --hf-file lumimaid-v0.2-12b-q8_0.gguf -c 2048