
Model Card for Fimbulvetr-11B-v2-iMat-GGUF

Update 4/15/24: Added a few missing quants to the list

Tested on the latest llama.cpp and koboldcpp v1.60.

This model fits a whole lot into its size! I was impressed by its understanding of other languages.

Tip: Select the biggest size that you can fit in VRAM while still leaving some space for context.
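As a rough way to apply that tip, file size scales with parameter count times bits per weight. The sketch below uses approximate bits-per-weight figures for a few quant types (the exact values vary slightly by llama.cpp version):

```python
def estimate_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size in GB: parameters x bits per weight / 8 bits per byte."""
    return n_params * bits_per_weight / 8 / 1e9

# Fimbulvetr-11B-v2 has ~10.7B parameters; bpw values below are approximate.
for name, bpw in [("IQ3_S", 3.44), ("Q4_K_M", 4.85), ("Q8_0", 8.5)]:
    print(f"{name}: ~{estimate_size_gb(10.7e9, bpw):.1f} GB")
```

Add roughly 1-2 GB on top of the file size for KV cache and compute buffers, depending on context length.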

All credit to Sao10K for the original model. This is just a quick test of the new quantization types such as IQ3_S in an attempt to further reduce VRAM requirements.

Quantized from fp16 with love. Importance matrix file Fimbulvetr-11B-v2-imatrix.dat was calculated using Q8_0.
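For anyone reproducing this, the llama.cpp importance-matrix workflow looks roughly like the sketch below (file paths and the calibration text file are illustrative, not the exact ones used here):

```shell
# 1. Compute the importance matrix by running calibration text through a quant (here Q8_0):
./imatrix -m Fimbulvetr-11B-v2-Q8_0.gguf -f calibration.txt -o Fimbulvetr-11B-v2-imatrix.dat

# 2. Quantize from fp16, guided by the importance matrix:
./quantize --imatrix Fimbulvetr-11B-v2-imatrix.dat \
    Fimbulvetr-11B-v2-f16.gguf Fimbulvetr-11B-v2-IQ3_S.gguf IQ3_S
```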

Looking for Q3/Q4/Q5 quants? See the link in the original model card below.


[Image: Fox1, "Cute girl to catch your attention."]

https://huggingface.co/Sao10K/Fimbulvetr-11B-v2-GGUF <------ GGUF

Fimbulvetr-v2 - A Solar-Based Model

Prompt Formats - Alpaca or Vicuna. Either one works fine.

Recommended SillyTavern Presets - Universal Light

Alpaca:

### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:

Vicuna:

System: <Prompt>

User: <Input>

Assistant:
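The two templates above can be assembled programmatically; here is a minimal sketch (function names are my own):

```python
def alpaca_prompt(instruction: str, context: str = "") -> str:
    """Build an Alpaca-style prompt; the Input section is optional."""
    parts = ["### Instruction:", instruction]
    if context:
        parts += ["### Input:", context]
    parts += ["### Response:", ""]
    return "\n".join(parts)

def vicuna_prompt(system: str, user: str) -> str:
    """Build a Vicuna-style prompt, ending at the Assistant turn."""
    return f"System: {system}\n\nUser: {user}\n\nAssistant:"

print(alpaca_prompt("Summarize the text.", "Fimbulvetr is a Solar-based 11B model."))
```

Frontends like SillyTavern apply these templates for you when the matching preset is selected.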

Changelogs:

25/2 - repo renamed to remove test, model card redone. Model's officially out.
15/2 - Heavy testing complete. Good feedback.


Rant - Kept For Historical Reasons

Ramble to meet minimum length requirements:

Tbh i wonder if this shit is even worth doing. Like im just some broke guy lmao I've spent so much. And for what? I guess creds. Feels good when a model gets good feedback, but it seems like im invisible sometimes. I should probably be advertising myself and my models in other places but I rarely have the time to. Probably just internal jealousy sparking up here and now. Whatever I guess.

Anyway the EMT vocation I'm doing is cool except it pays peanuts, damn bruh 1.1k per month lmao. Government too broke to pay for shit. Pays the bills I suppose.

Anyway cool beans, I'm either going to continue the Solar Train or go to Mixtral / Yi when I get paid.

You still here?


Model size: 10.7B params
Architecture: llama
Format: GGUF

Available quantization levels: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit.
