DavidAU committed on
Commit
50a0210
1 Parent(s): 5ab4c2d

Create README.md

Files changed (1)
  1. README.md +103 -0
README.md ADDED
---
license: apache-2.0
language:
- en
tags:
- creative
- story
- writing
- fiction
- float32
- roleplaying
- rp
- horror
- science fiction
- fiction writing
- scene generation
- scene continue
- brainstorm 5x
- brainstorm 10x
- enhanced
- space whale
- 32 bit upscale
pipeline_tag: text-generation
---

<H3>BRAINSTORM - 4x - Multi 3x (ed3): L3-SthenoMaidBlackroot-8B-V1</H3>

This repo contains quants of the 4x "Brainstorm" version of L3-SthenoMaidBlackroot-8B-V1. Brainstorm is a method of augmenting reasoning in an LLM
to increase its performance at the core level for ANY creative use case(s).

This version has 4 "reasoning" centers: one from the original merge, and 3 from the unmerged models (at close to full strength),
melded into a 4-layer reasoning center. Each of these reasoning centers is further split into 3 units and calibrated, for a total
of 12 "reasoning centers".

The BRAINSTORM process was developed by David_AU.

<B>What is "Brainstorm"?</B>

The reasoning center of an LLM is taken apart, reassembled, and expanded by 8x.

Then these centers are individually calibrated. These "centers" also interact with each other, which introduces
subtle changes into the reasoning process. The calibrations then adjust - dial up or down - these "changes". The
number of centers (4x, 5x, 8x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak.

The "Multi" reasoning system pulls "reasoning centers" from multiple models and fuses them into one long "chain of reasoning",
so to speak. Each one is then calibrated. Each "center" interacts with the other "centers", and the order of the centers further
impacts the model's output style - again, roughly speaking.

Each of these is further split, expanded, and calibrated.

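Brainstorm's exact recipe is not published here, but the general idea of "expanding" a block of transformer layers can be sketched. Below is a minimal, hypothetical illustration using Hugging Face transformers; the model id, layer slice, and every other specific are placeholders and NOT the actual Brainstorm configuration:

```python
# Hypothetical sketch of layer expansion - NOT the actual Brainstorm recipe.
# Duplicates a slice of decoder layers so each copy could later be
# individually adjusted ("calibrated") at the source-files level.
import copy

import torch
from transformers import AutoModelForCausalLM

MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder model id

model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float32)

layers = model.model.layers      # the decoder layer stack
start, end = 24, 28              # placeholder "reasoning center" slice

expanded = list(layers[:end])
for layer in layers[start:end]:  # append duplicates of the slice
    expanded.append(copy.deepcopy(layer))
expanded.extend(layers[end:])

model.model.layers = torch.nn.ModuleList(expanded)
model.config.num_hidden_layers = len(expanded)

# A real pipeline would also repair per-layer metadata (e.g. layer indices
# used by the KV cache) and then calibrate each duplicated layer.
model.save_pretrained("./expanded-model")
```
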
The core aim of this process is to increase the model's detail, its concept of and connection to the "world", general concept connections, prose quality, and prose length, without affecting
instruction following. This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s), and similar use cases.

Here are some of the enhancements this process brings to the model's performance:

- Prose generation seems more focused on the moment-to-moment.
- Sometimes there will be "preamble" and/or foreshadowing present.
- Fewer or no "cliches".
- Better overall prose and/or more complex / nuanced prose.
- A greater sense of nuance on all levels.
- Coherence is stronger.
- Description is more detailed, and connected more closely to the content.
- Similes and metaphors are stronger and better connected to the prose, story, and characters.
- The sense of "being there" / in the moment is enhanced.
- Details are more vivid, and there are more of them.
- Prose generation length can be long to extreme.
- Emotional engagement is stronger.
- The model will take FEWER liberties vs a normal model: it will follow directives more closely but will "guess" less.
- The MORE instructions and/or details you provide, the more strongly the model will respond.
- Depending on the model, the "voice" may be more "human" vs the original model's "voice".

Other "lab" observations:

- This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true!
- However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for, so to speak.
- From lab testing, the model seems to ponder and consider more carefully, roughly speaking.
- You could say this process sharpens the model's focus on its task(s) at a deeper level.

The process to modify the model occurs at the root level - the source-files level. The model can then be quantized as GGUF, EXL2, AWQ, etc.

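As a rough, hypothetical example of the GGUF path, the modified source files could be converted and quantized with llama.cpp's standard tools along these lines (all paths and the quant type are placeholders):

```python
# Hypothetical sketch: convert modified source files to GGUF and quantize
# with llama.cpp's tools. Paths and quant type are placeholders.
import subprocess

# Convert the HF-format source files to a full-precision GGUF.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", "./expanded-model",
     "--outfile", "model-f32.gguf", "--outtype", "f32"],
    check=True,
)

# Quantize down to a smaller type such as Q4_K_M.
subprocess.run(
    ["./llama-quantize", "model-f32.gguf", "model-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```
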
Other technologies developed by David_AU, like "Ultra" (precision), "Neo Imatrix" (custom imatrix datasets), and "X-quants" (custom application of the imatrix process),
can further enhance the model's performance alongside the "Brainstorm" process.

The "Brainstorm" process has been tested on multiple Llama2, Llama3, and Mistral models of various parameter sizes, as well as on
"root" models like "Llama3 Instruct" and "Mistral Instruct", and on "merged" / "fine-tuned" models.

<B>Usage Notice:</B>

You may need to raise the "repeat penalty" from the default of 1.1 to slightly higher levels in some use case(s).

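For instance, a minimal sketch using llama-cpp-python; the file name and sampling values are placeholders, not tested recommendations:

```python
# Hypothetical sketch: raise repeat_penalty above the common 1.1 default
# if output becomes repetitive. File name and values are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="model-Q4_K_M.gguf", n_ctx=4096)

out = llm(
    "Write the opening scene of a gothic horror story.",
    max_tokens=512,
    temperature=0.8,
    repeat_penalty=1.15,  # nudged up from 1.1
)
print(out["choices"][0]["text"])
```
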
<B>Original Model:</B>

For original model specifications, usage information, and other important details, please see:

[ https://huggingface.co/DavidAU/L3-8B-Stheno-v3.2-Ultra-NEO-V1-IMATRIX-GGUF ]

and the original model page:

Special thanks to the model creator Sao10K for making such a fantastic model:

[ https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2 ]

More to follow...