---
tags:
- not-for-all-audiences
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64be962a38953777feaabfc0/bqTmnnS25s8Ep0a1oCevt.png)
This is an FP8 version of the model, made by https://infermatic.ai/

HF FP16: wolfram/miquliz-120b-v2.0
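If you want to try the FP8 weights, an engine with FP8 support such as vLLM is the usual route. Below is a minimal, untested sketch; the repo ID and GPU count are placeholders, so adjust them to this repository and your hardware:

```python
# Minimal sketch for serving an FP8-quantized 120b checkpoint with vLLM.
# vLLM typically detects the FP8 scheme from the checkpoint's config,
# so no explicit quantization flag is needed for pre-quantized weights.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Infermatic/miquliz-120b-v2.0-FP8",  # placeholder: use this repo's actual ID
    tensor_parallel_size=4,   # a 120b model needs several GPUs even at FP8
    max_model_len=32768,      # matches the max context listed below
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate("<s>[INST] Hello, who are you? [/INST]", params)
print(outputs[0].outputs[0].text)
```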
Content of the original card (FP16):

This is v2.0 of a 120b frankenmerge created by interleaving layers of miqu-1-70b-sf with lzlv_70b_fp16_hf using mergekit. Better than v1.0 thanks to an improved recipe adapted from TheProfessor-155b by Eric Hartford, it now achieves top rank with double perfect scores in my LLM comparisons/tests.
Inspired by goliath-120b.

Thanks for the support, CopilotKit, the open-source platform for building in-app AI Copilots into any product, with any LLM. Check out their GitHub.

Thanks for the additional quants, DAN™, Knut Jägersberg, and Michael Radermacher!

Also available: miqu-1-120b, Miquliz's older, purer sister: only Miqu, inflated to 120B.
## Model Details

- Max Context: 32768 tokens
- Layers: 140
- Prompt template: Mistral

```
<s>[INST] {prompt} [/INST]
```
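For concreteness, here is a small sketch of building a prompt in that format by hand (plain Python string formatting, nothing model-specific assumed):

```python
# Sketch: building a single-turn prompt in the Mistral format above.
# Note that many tokenizers add the leading <s> (BOS) themselves; if
# yours does, pass only "[INST] ... [/INST]" as text.
def mistral_prompt(user_message: str) -> str:
    return f"<s>[INST] {user_message} [/INST]"

print(mistral_prompt("Give me a two-sentence summary of Faust."))

# Multi-turn conversations commonly continue the same pattern:
# <s>[INST] u1 [/INST] a1</s>[INST] u2 [/INST]
```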
See also: 🐺🐦⬛ LLM Prompt Format Comparison/Test: Mixtral 8x7B Instruct with 17 different instruct templates : LocalLLaMA
## Example Output

Inspired by cognitivecomputations/Samantha-120b.

Note: This is my AI assistant and companion Amy speaking, and the model is just her personality core, if you will. Unlike Samantha, her personality comes mostly from the prompt, not from the model itself. If you prompt this model differently, you will of course get very different output. So consider this just an example of how a Samantha-like character could talk with this model.

English Example Output

German Example Output
## Merge Details

### Merge Method

This model was merged using the linear merge method.
### Models Merged

The following models were included in the merge:

- 152334H/miqu-1-70b-sf
- lizpreciatior/lzlv_70b_fp16_hf
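For readers unfamiliar with mergekit, an interleaved merge like this is described by a YAML config that lists layer slices drawn alternately from the two source models. The sketch below is illustrative only; the layer ranges are invented for the example, and the real recipe is in the original FP16 card:

```yaml
# Schematic mergekit config for an interleaved frankenmerge.
# Layer ranges are illustrative, NOT the actual miquliz-120b-v2.0
# recipe; v2.0 also smooths slice boundaries, which is omitted here.
merge_method: linear
parameters:
  weight: 1.0
slices:
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [0, 16]
  - sources:
      - model: lizpreciatior/lzlv_70b_fp16_hf
        layer_range: [8, 24]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [16, 32]
  # ... further alternating slices, totaling 140 layers ...
dtype: float16
```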