aashish1904 committed
Commit c113a04
1 Parent(s): 874269a

Upload README.md with huggingface_hub

Files changed (1):
1. README.md +60 -0
README.md ADDED
@@ -0,0 +1,60 @@
---
library_name: transformers
license: apache-2.0
base_model:
- grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B
tags:
- generated_from_trainer
datasets:
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/stheno-filtered-v1.1
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
- anthracite-org/kalo_opus_misc_240827
model-index:
- name: Epiculous/NovaSpark
  results: []
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/NovaSpark-GGUF
This is a quantized version of [Epiculous/NovaSpark](https://huggingface.co/Epiculous/NovaSpark) created using llama.cpp.

# Original Model Card

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64adfd277b5ff762771e4571/pnFt8anKzuycrmIuB-tew.png)

Switching things up a bit since the last slew of models were all 12B, we now have NovaSpark! NovaSpark is an 8B model trained on GrimJim's [abliterated](https://huggingface.co/grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B) version of Arcee's [SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite).
The hope is that abliteration removes some of the inherent refusals and censorship of the original model. However, I noticed that finetuning on GrimJim's model undid some of the abliteration, so abliteration will more than likely have to be reapplied to the resulting model to reinforce it.

# Quants!
<strong>full</strong> / [exl2](https://huggingface.co/Epiculous/NovaSpark-exl2) / [gguf](https://huggingface.co/Epiculous/NovaSpark-GGUF)

## Prompting
This model is trained on the Llama Instruct template; the prompting structure goes a little something like this:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
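The template above can be assembled programmatically. A minimal sketch in plain Python (the helper function name and the placeholder system/user text are illustrative, not part of the model's tooling; the special tokens come straight from the template):

```python
# Minimal sketch: assemble a single-turn Llama Instruct prompt by hand.
# build_llama3_prompt is a hypothetical helper, not an official API.

def build_llama3_prompt(system_prompt: str, prompt: str) -> str:
    """Return the full prompt string for one system + user turn,
    ending where the assistant's reply should begin."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

text = build_llama3_prompt("You are a helpful roleplay partner.", "Hello!")
print(text)
```

Most frontends (and `tokenizer.apply_chat_template` in transformers) do this formatting for you; the sketch just makes the token layout explicit.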

### Context and Instruct
This model is trained on Llama Instruct; please use that Context and Instruct template.

### Current Top Sampler Settings
[Smooth Creativity](https://files.catbox.moe/0ihfir.json): Credit to Juelsman for researching this one!<br/>
[Variant Chimera](https://files.catbox.moe/h7vd45.json): Credit to Numbra!<br/>
[Spicy_Temp](https://files.catbox.moe/9npj0z.json)<br/>
[Violet_Twilight-Nitral-Special](https://files.catbox.moe/ot54u3.json)<br/>
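
Presets like the ones linked above are JSON files bundling generation parameters. A hypothetical sketch of the general shape as a Python dict (the keys are common sampler parameters; the values are illustrative only and do not reproduce the linked presets):

```python
# Hypothetical sampler preset; values are illustrative, not taken from
# the catbox.moe files linked above.
sampler_preset = {
    "temperature": 1.0,        # scales the logit distribution before sampling
    "min_p": 0.05,             # drop tokens below 5% of the top token's probability
    "repetition_penalty": 1.05,  # mildly penalize repeated tokens
    "top_k": 0,                # 0 disables top-k filtering in most frontends
}
print(sampler_preset)
```

Import such a file into your frontend of choice rather than setting each value by hand.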