R136a1 committed on
Commit cb56392
1 Parent(s): fd479cb

Update README.md

Files changed (1)
  1. README.md +1 -79
README.md CHANGED
@@ -24,82 +24,4 @@ Below is an instruction that describes a task. Write a response that appropriate
 
  ### Response:
 
- ```
-
- ## Original model card:
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/VjlbZcxzuvMjwOjnYddRK.png)
-
- THIS MODEL IS MADE FOR LEWD
-
- SEXUAL, CRUDE AND KINKY CONTENT IN OUTPUT CAN AND WILL HAPPEN. YOU'RE WARNED
-
- This is MLewd merged with [Xwin-LM/Xwin-LM-13B-V0.2](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2)
-
- <!-- description start -->
- ## Description
-
- This repo contains fp16 files of Xwin-MLewd-13B-V0.2, very hot and lewd model based on Xwin 0.2 13B.
-
- <!-- description end -->
- <!-- description start -->
- ## Models and loras used
-
- - Undi95/ReMM-S-Light (base/private)
- - Undi95/CreativeEngine
- - Brouz/Slerpeno
- - The-Face-Of-Goonery/Huginn-v3-13b
- - zattio770/120-Days-of-LORA-v2-13B
- - PygmalionAI/pygmalion-2-13b
- - Undi95/StoryTelling
- - TokenBender/sakhi_13B_roleplayer_NSFW_chat_adapter
- - nRuaif/Kimiko-v2-13B
- - The-Face-Of-Goonery/Huginn-13b-FP16
- - lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
- - Xwin-LM/Xwin-LM-13B-V0.2
-
- <!-- description end -->
- <!-- prompt-template start -->
- ## Prompt template: Alpaca
-
- ```
- Below is an instruction that describes a task. Write a response that appropriately completes the request.
-
- ### Instruction:
- {prompt}
-
- ### Response:
-
- ```
- ## The secret sauce
-
- ```
- slices:
-   - sources:
-       - model: Xwin-LM/Xwin-LM-13B-V0.2
-         layer_range: [0, 40]
-       - model: Undi95/MLewd-v2.4-13B
-         layer_range: [0, 40]
- merge_method: slerp
- base_model: Xwin-LM/Xwin-LM-13B-V0.2
- parameters:
-   t:
-     - filter: lm_head
-       value: [0.55]
-     - filter: embed_tokens
-       value: [0.7]
-     - filter: self_attn
-       value: [0.65, 0.35]
-     - filter: mlp
-       value: [0.35, 0.65]
-     - filter: layernorm
-       value: [0.4, 0.6]
-     - filter: modelnorm
-       value: [0.6]
-     - value: 0.5 # fallback for rest of tensors
- dtype: float16
- ```
-
- Special thanks to Sushi and Shena ♥
-
- If you want to support me, you can [here](https://ko-fi.com/undiai).
+ ```
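The removed model card's merge config uses `merge_method: slerp`, which blends each pair of tensors by spherical linear interpolation, with the per-filter `t` values (e.g. `0.55` for `lm_head`) setting how far the result leans toward the second model. A minimal pure-Python sketch of that interpolation, assuming tensors flattened to lists of floats; the `slerp` helper here is illustrative and not mergekit's actual implementation:

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flat weight vectors.

    t=0 returns v0, t=1 returns v1; intermediate t moves along the
    great-circle arc between the two directions.
    """
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    # Clamp to avoid domain errors from floating-point drift.
    cos_theta = max(-1.0, min(1.0, dot / (n0 * n1 + eps)))
    theta = math.acos(cos_theta)
    if abs(theta) < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# Example: halfway between two orthogonal directions.
# slerp(0.5, [1.0, 0.0], [0.0, 1.0]) → approximately [0.7071, 0.7071]
```

In the config, a two-element `value` like `[0.65, 0.35]` describes a gradient of `t` across the layer stack for that filter, while the bare `value: 0.5` is the fallback for tensors no filter matches.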