---
tags:
- not-for-all-audiences
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64be962a38953777feaabfc0/bqTmnnS25s8Ep0a1oCevt.png)

This is an FP8 version of the model, made by https://infermatic.ai/

HF FP16: [wolfram/miquliz-120b-v2.0](https://huggingface.co/wolfram/miquliz-120b-v2.0)
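
As a rough sketch of serving this FP8 checkpoint (an assumption, not documented usage for this repo): vLLM can load FP8 weights on recent NVIDIA GPUs. The model id and GPU count below are placeholders:

```python
from vllm import LLM, SamplingParams

# Placeholder: substitute this repository's actual model id.
MODEL_ID = "Infermatic/miquliz-120b-v2.0-FP8"

llm = LLM(
    model=MODEL_ID,
    quantization="fp8",      # run the pre-quantized FP8 weights
    tensor_parallel_size=4,  # a 120B model still spans several GPUs; adjust to yours
    max_model_len=32768,     # the model's maximum context
)

params = SamplingParams(temperature=0.7, max_tokens=256)
# The tokenizer adds the leading <s> (BOS) itself, so the prompt
# starts directly with the Mistral [INST] tag.
outputs = llm.generate(["[INST] Hello, who are you? [/INST]"], params)
print(outputs[0].outputs[0].text)
```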


Content of the original card (FP16):

This is v2.0 of a 120b frankenmerge created by interleaving layers of miqu-1-70b-sf with lzlv_70b_fp16_hf using mergekit. Better than v1.0 thanks to the improved recipe adapted from TheProfessor-155b by Eric Hartford, it is now achieving top rank with double perfect scores in my LLM comparisons/tests.

Inspired by goliath-120b.

Thanks for the support, CopilotKit – the open-source platform for building in-app AI copilots into any product, with any LLM. Check out their GitHub.

Thanks for the additional quants, DAN™, Knut Jägersberg, and Michael Radermacher!

Also available: miqu-1-120b – Miquliz's older, purer sister; only Miqu, inflated to 120B.

## Model Details

- Max Context: 32768 tokens
- Layers: 140
- Prompt template (Mistral): `<s>[INST] {prompt} [/INST]`
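
As a sketch of how this template extends to multi-turn chat (standard Mistral instruct formatting; spacing conventions vary slightly between implementations, and the example turns are invented):

```python
def mistral_prompt(turns):
    """Format (user, assistant) turns in the Mistral instruct template.

    <s>/</s> are the BOS/EOS tokens; most Llama-family tokenizers add
    the leading <s> automatically, so drop it if yours does.
    """
    prompt = "<s>"
    for user, assistant in turns[:-1]:
        prompt += f"[INST] {user} [/INST] {assistant}</s>"
    # Leave the final user turn open for the model to answer.
    prompt += f"[INST] {turns[-1][0]} [/INST]"
    return prompt


print(mistral_prompt([
    ("Hi, who are you?", "I'm Amy, your AI assistant."),
    ("What's your maximum context length?", ""),
]))
# -> <s>[INST] Hi, who are you? [/INST] I'm Amy, your AI assistant.</s>[INST] What's your maximum context length? [/INST]
```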

See also: 🐺🐦‍⬛ LLM Prompt Format Comparison/Test: Mixtral 8x7B Instruct with 17 different instruct templates (on r/LocalLLaMA)

## Example Output
Inspired by cognitivecomputations/Samantha-120b.

Note: This is my AI assistant and companion Amy speaking, and the model is just her personality core, if you will. Unlike Samantha, her personality is mostly from the prompt, and not the model itself. If you prompt this model differently, you'll get very different output, of course. So consider this just as an example of how a Samantha-like character could talk with this model.
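
To illustrate, since the Mistral template has no dedicated system slot, a persona like Amy's can be injected by prepending a character description to the first user turn (the persona text here is hypothetical and heavily shortened):

```python
persona = (
    "You are Amy, a warm, witty AI assistant and companion. "
    "Stay in character and speak in the first person."
)
user_message = "Good morning! How did you sleep?"

# No system role in the Mistral template, so prepend the persona
# to the first user turn instead.
prompt = f"<s>[INST] {persona}\n\n{user_message} [/INST]"
print(prompt)
```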

The English and German example outputs (collapsed sections in the original card) are omitted here; see wolfram/miquliz-120b-v2.0 for the full transcripts.
## Merge Details

### Merge Method

This model was merged using the linear merge method (a sketch of the layer interleave follows the model list below).

### Models Merged

The following models were included in the merge:

- [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)
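
For intuition only, here is a sketch of how a goliath-style interleave of two 80-layer 70B models can arrive at 140 layers. The slice ranges below are hypothetical, not the actual recipe; the exact mergekit configuration is documented in the original FP16 card.

```python
# Hypothetical goliath-style slice plan: 20-layer slices, stepping by 10,
# alternating between the two 80-layer source models. This only shows how
# a 140-layer total can arise from interleaving.
SOURCES = ["152334H/miqu-1-70b-sf", "lizpreciatior/lzlv_70b_fp16_hf"]

slices = [
    {"model": SOURCES[i % 2], "layer_range": (start, start + 20)}
    for i, start in enumerate(range(0, 70, 10))  # starts: 0, 10, ..., 60
]

total_layers = sum(b - a for s in slices for a, b in [s["layer_range"]])
print(total_layers)  # 7 slices x 20 layers = 140, matching the merged model
```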