Lewdiculous committed
Commit: bfb2593
1 Parent(s): 025180c

Update README.md

Files changed (1):
  1. README.md +145 -3

README.md CHANGED
@@ -1,3 +1,145 @@
- ---
- license: unlicense
- ---
+ ---
+ license: unlicense
+ language:
+ - en
+ inference: false
+ tags:
+ - roleplay
+ - llama3
+ - sillytavern
+ ---
+
+ # #roleplay #sillytavern #llama3
+
+ My GGUF-IQ-Imatrix quants for [**NeverSleep/Lumimaid-v0.2-8B**](https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B).
+
+ I recommend checking their page for feedback and support.
+
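+ If it helps, below is a minimal sketch of pulling a single quant file with `huggingface_hub`; the repository ID and filename are placeholders I made up for illustration, so swap in the actual quant you want from this repo.
+
+ ```python
+ # Minimal sketch: download one GGUF quant with huggingface_hub.
+ # NOTE: repo_id and filename are hypothetical placeholders - replace them
+ # with this repo's actual ID and the quant file you want.
+ from huggingface_hub import hf_hub_download
+
+ gguf_path = hf_hub_download(
+     repo_id="Lewdiculous/Lumimaid-v0.2-8B-GGUF-IQ-Imatrix",  # placeholder repo ID
+     filename="Lumimaid-v0.2-8B-Q4_K_M-imat.gguf",            # placeholder quant filename
+ )
+ print(gguf_path)  # local path to the downloaded model file
+ ```
+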
+ > [!IMPORTANT]
+ > **Quantization process:** <br>
+ > Imatrix data was generated from the FP16-GGUF, and the quant conversions were made directly from the BF16-GGUF. <br>
+ > This is a bit more disk- and compute-intensive, but it hopefully avoids any losses during conversion. <br>
+ > To run this model, please use the [**latest version of KoboldCpp**](https://github.com/LostRuins/koboldcpp/releases/latest). <br>
+ > If you notice any issues, let me know in the discussions.
+
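+ For anyone curious how a flow like the one described above can be reproduced, here is a rough sketch using llama.cpp's `llama-imatrix` and `llama-quantize` tools driven from Python. The file names are placeholders and exact flags can vary between llama.cpp builds, so treat this as an outline under those assumptions rather than the exact process used for these quants.
+
+ ```python
+ # Rough sketch of an imatrix-then-quantize flow with llama.cpp's CLI tools.
+ # File names are placeholders; verify tool flags against your local --help.
+ import subprocess
+
+ FP16_GGUF = "model-fp16.gguf"         # placeholder: FP16 conversion, used for imatrix data
+ BF16_GGUF = "model-bf16.gguf"         # placeholder: BF16 conversion, used as the quant source
+ CALIBRATION = "calibration-data.txt"  # placeholder: text corpus for the importance matrix
+
+ # 1) Generate the importance matrix from the FP16 GGUF.
+ subprocess.run(
+     ["llama-imatrix", "-m", FP16_GGUF, "-f", CALIBRATION, "-o", "imatrix.dat"],
+     check=True,
+ )
+
+ # 2) Quantize directly from the BF16 GGUF using that imatrix.
+ subprocess.run(
+     ["llama-quantize", "--imatrix", "imatrix.dat", BF16_GGUF, "model-Q4_K_M-imat.gguf", "Q4_K_M"],
+     check=True,
+ )
+ ```
+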
+ > [!NOTE]
+ > **General usage:** <br>
+ > For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** (4.89 BPW) quant for context sizes up to 12288 (see the loading sketch after this note). <br>
+ >
+ > **Presets:** <br>
+ > Some compatible SillyTavern presets can be found [**here (Virt's Roleplay Presets)**](https://huggingface.co/Virt-io/SillyTavern-Presets). <br>
+ > Check [**discussions such as this one**](https://huggingface.co/Virt-io/SillyTavern-Presets/discussions/5#664d6fb87c563d4d95151baa) for other recommendations and samplers.
+
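+ If you prefer a scripted setup instead of the KoboldCpp UI, a rough llama-cpp-python equivalent of the Q4_K_M / 12288-context recommendation might look like this; the model path is a placeholder, and `n_gpu_layers` depends on your GPU.
+
+ ```python
+ # Rough llama-cpp-python equivalent of the recommendation above
+ # (Q4_K_M quant, 12288 context). The model path is a placeholder.
+ from llama_cpp import Llama
+
+ llm = Llama(
+     model_path="Lumimaid-v0.2-8B-Q4_K_M-imat.gguf",  # placeholder local filename
+     n_ctx=12288,       # context size from the note above
+     n_gpu_layers=-1,   # offload all layers; lower this if you run out of VRAM
+ )
+
+ out = llm("Write a short greeting in character.", max_tokens=64)
+ print(out["choices"][0]["text"])
+ ```
+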
+ <details>
+ <summary>⇲ Click here to expand/hide information – General chart with relative quant performances.</summary>
+
+ > [!NOTE]
+ > **Recommended read:** <br>
+ >
+ > [**"Which GGUF is right for me? (Opinionated)" by Artefact2**](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
+ >
+ > *Click the image to view full size.*
+ > !["Which GGUF is right for me? (Opinionated)" by Artefact2 - First Graph](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/fScWdHIPix5IzNJ8yswCB.webp)
+
+ </details>
+
+ > [!TIP]
+ > **Personal-support:** <br>
+ > I apologize for disrupting your experience. <br>
+ > Eventually I may be able to use a dedicated server for this, but for now hopefully these quants are helpful. <br>
+ > If you **want to** and are **able to**... <br>
+ > You can [**spare some change over here (Ko-fi)**](https://ko-fi.com/Lewdiculous). <br>
+ >
+ > **Author-support:** <br>
+ > You can support the authors [**at their pages**](https://ko-fi.com/undiai)/[**here**](https://ikaridevgit.github.io/).
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/qEH7KuSGfUGXSHeyWSwS-.png)
+
+ <details>
+ <summary>Original model card information.</summary>
+
+ ## **Original card:**
+
+ ## Lumimaid 0.2
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/TUcHg7LKNjfo0sni88Ps7.png" alt="Image" style="display: block; margin-left: auto; margin-right: auto; width: 65%;">
+ <div style="text-align: center; font-size: 30px;">
+ <a href="https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B">[8b]</a> -
+ <a href="https://huggingface.co/NeverSleep/Lumimaid-v0.2-12B">12b</a> -
+ <a href="https://huggingface.co/NeverSleep/Lumimaid-v0.2-70B">70b</a> -
+ <a href="https://huggingface.co/NeverSleep/Lumimaid-v0.2-123B">123b</a>
+ </div>
+
+ ### This model is based on: [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
+ Wandb: https://wandb.ai/undis95/Lumi-Llama-3-1-8B?nw=nwuserundis95
+
+ Lumimaid 0.1 -> 0.2 is a HUGE step up dataset-wise.
+
+ Since some people told us our models were sloppy, Ikari decided to say fuck it and literally nuke every chat that contained the most slop.
+
+ Our dataset has stayed the same since day one: we added data over time, cleaned it, and repeated. After not releasing a model for a while because we were never satisfied, we think it's time to come back!
+
+ ## Prompt template: Llama-3-Instruct
+
+ ```
+ <|begin_of_text|><|start_header_id|>system<|end_header_id|>
+
+ {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
+
+ {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
+
+ {output}<|eot_id|>
+ ```
+
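+ To make the template above concrete, here is a small illustrative Python helper that assembles a single-turn prompt in this format; the helper and the example strings are an addition for illustration, not part of the original card.
+
+ ```python
+ # Illustrative helper (not from the original card): build a single-turn
+ # Llama-3-Instruct prompt in the format shown above.
+ def build_prompt(system_prompt: str, user_input: str) -> str:
+     return (
+         "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
+         f"{system_prompt}<|eot_id|>"
+         "<|start_header_id|>user<|end_header_id|>\n\n"
+         f"{user_input}<|eot_id|>"
+         "<|start_header_id|>assistant<|end_header_id|>\n\n"
+     )
+
+ # The model's reply then takes the place of {output}, ending with <|eot_id|>.
+ print(build_prompt("You are a helpful roleplay assistant.", "Introduce yourself."))
+ ```
+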
+ ## Credits:
+ - Undi
+ - IkariDev
+
+ ## Training data we used to make our dataset:
+
+ - [Epiculous/Gnosis](https://huggingface.co/Epiculous/Gnosis)
+ - [ChaoticNeutrals/Luminous_Opus](https://huggingface.co/datasets/ChaoticNeutrals/Luminous_Opus)
+ - [ChaoticNeutrals/Synthetic-Dark-RP](https://huggingface.co/datasets/ChaoticNeutrals/Synthetic-Dark-RP)
+ - [ChaoticNeutrals/Synthetic-RP](https://huggingface.co/datasets/ChaoticNeutrals/Synthetic-RP)
+ - [Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned)
+ - [Gryphe/Opus-WritingPrompts](https://huggingface.co/datasets/Gryphe/Opus-WritingPrompts)
+ - [meseca/writing-opus-6k](https://huggingface.co/datasets/meseca/writing-opus-6k)
+ - [meseca/opus-instruct-9k](https://huggingface.co/datasets/meseca/opus-instruct-9k)
+ - [PJMixers/grimulkan_theory-of-mind-ShareGPT](https://huggingface.co/datasets/PJMixers/grimulkan_theory-of-mind-ShareGPT)
+ - [NobodyExistsOnTheInternet/ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
+ - [Undi95/toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
+ - [cgato/SlimOrcaDedupCleaned](https://huggingface.co/datasets/cgato/SlimOrcaDedupCleaned)
+ - [kalomaze/Opus_Instruct_25k](https://huggingface.co/datasets/kalomaze/Opus_Instruct_25k)
+ - [Doctor-Shotgun/no-robots-sharegpt](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
+ - [Norquinal/claude_multiround_chat_30k](https://huggingface.co/datasets/Norquinal/claude_multiround_chat_30k)
+ - [nothingiisreal/Claude-3-Opus-Instruct-15K](https://huggingface.co/datasets/nothingiisreal/Claude-3-Opus-Instruct-15K)
+ - All the Aesirs dataset, cleaned and unslopped
+ - All the luminae dataset, cleaned and unslopped
+ - A small, reduced part of Airoboros
+
+ We sadly couldn't find the sources of the following; DM us if you recognize your set!
+
+ - Opus_Instruct-v2-6.5K-Filtered-v2-sharegpt
+ - claude_sharegpt_trimmed
+ - CapybaraPure_Decontaminated-ShareGPT_reduced
+
+ ## Dataset credits:
+ - Epiculous
+ - ChaoticNeutrals
+ - Gryphe
+ - meseca
+ - PJMixers
+ - NobodyExistsOnTheInternet
+ - cgato
+ - kalomaze
+ - Doctor-Shotgun
+ - Norquinal
+ - nothingiisreal
+
+ ## Others
+
+ Undi: If you want to support us, you can do so [here](https://ko-fi.com/undiai).
+
+ IkariDev: Visit my [retro/neocities-style website](https://ikaridevgit.github.io/) please kek
+
+ </details>