
Quantization made by Richard Erkhov.

- Github
- Discord
- Request more models

Umbra-v3-MoE-4x11b-2ex - GGUF

Original model description:

```yaml
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- Himitsui/Kaiju-11B
- Sao10K/Fimbulvetr-11B-v2
- decapoda-research/Antares-11b-v2
- beberik/Nyxene-v3-11B
base_model:
- Himitsui/Kaiju-11B
- Sao10K/Fimbulvetr-11B-v2
- decapoda-research/Antares-11b-v2
- beberik/Nyxene-v3-11B
```

Umbra-v3-MoE-4x11b Data Card

Umbra-v3-MoE-4x11b-2ex

Creator: SteelSkull

About Umbra-v3-MoE-4x11b: A Mixture of Experts model designed for general assistance, with a special knack for storytelling and RP/ERP.

It integrates models from notable sources for enhanced performance across diverse tasks. This is the two-expert version, i.e. two experts are routed per token.
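As a sketch of how a frankenMoE like this is typically assembled with mergekit's `mergekit-moe` tool (the author's actual config is not shown on this card; the choice of base model and the gate prompts below are assumptions for illustration only):

```yaml
# Hypothetical mergekit-moe config for a 4x11b MoE with two experts
# routed per token. Positive prompts steer the router and are
# illustrative, not the author's.
base_model: Sao10K/Fimbulvetr-11B-v2   # assumption: any of the four could be the base
gate_mode: hidden        # route using hidden-state representations of the prompts
dtype: bfloat16
experts_per_token: 2     # presumably the "2ex" in the model name
experts:
  - source_model: Himitsui/Kaiju-11B
    positive_prompts: ["roleplay", "storytelling"]
  - source_model: Sao10K/Fimbulvetr-11B-v2
    positive_prompts: ["creative writing"]
  - source_model: decapoda-research/Antares-11b-v2
    positive_prompts: ["general assistance"]
  - source_model: beberik/Nyxene-v3-11B
    positive_prompts: ["reasoning", "instruction following"]
```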

Source Models:

- Himitsui/Kaiju-11B
- Sao10K/Fimbulvetr-11B-v2
- decapoda-research/Antares-11b-v2
- beberik/Nyxene-v3-11B

Update-Log:

The [Umbra Series] keeps rolling out from the [Lumosia Series] garage, aiming to be your digital Alfred with a side of Shakespeare for those RP/ERP nights.

What's Fresh in v3?

Didn’t reinvent the wheel, just slapped on some fancier rims: upgraded the models and tweaked the prompts a bit. Now Umbra's not just a general-use LLM; it's also focused on spinning stories and "Stories".

Negative Prompt Minimalism

The prompts went on a bit of a diet and gym routine: more beef on the positives, trimming down the negatives as usual, with a dash of my midnight musings.

Still Guessing, Aren’t We?

Just so we're clear, "v3" is not the messiah of updates. It’s another experiment in the saga.

Dive into Umbra v3 and toss your two cents my way. Your feedback is the caffeine in my code marathon.

Downloads last month: 97
Format: GGUF
Model size: 36.1B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
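When picking a quantization, a back-of-the-envelope file-size estimate helps: on-disk size is roughly parameter count times bits per weight. A minimal sketch using the 36.1B figure above (real GGUF quant types such as Q4_K_M mix bit widths and add metadata, so treat these as ballpark numbers, not exact file sizes):

```python
# Rough GGUF file-size estimate for a 36.1B-parameter model:
# size in bytes ≈ params * bits_per_weight / 8.
PARAMS = 36.1e9

def approx_size_gb(bits: int, params: float = PARAMS) -> float:
    """Approximate on-disk size in gigabytes (1 GB = 1e9 bytes)."""
    return params * bits / 8 / 1e9

# Print an estimate for each bit width offered above.
for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.1f} GB")
```

The practical takeaway: the 2-bit quant fits in far less memory than the 8-bit one, at a cost in output quality; pick the largest quant that fits your RAM/VRAM budget.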
