munish0838 committed
Commit c9eb5a7
1 Parent(s): f06f567

Upload README.md with huggingface_hub

Files changed (1):
1. README.md +45 -0
README.md ADDED
@@ -0,0 +1,45 @@
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
inference:
  parameters:
    temperature: 0.7
widget:
- messages:
  - role: user
    content: What's the difference between a banana and a strawberry?
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/Phi-3-mini-4k-geminified-GGUF

This is a quantized version of [failspy/Phi-3-mini-4k-geminified](https://huggingface.co/failspy/Phi-3-mini-4k-geminified), created using llama.cpp.
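
As a quick-start, here is a minimal sketch of downloading and chatting with one of these GGUF files via llama-cpp-python. The quant filename below is an assumption; check the repository's file list for the quantization level you actually want.

```python
# Minimal sketch: fetch a GGUF from this repo and chat with it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="QuantFactory/Phi-3-mini-4k-geminified-GGUF",
    filename="Phi-3-mini-4k-geminified.Q4_K_M.gguf",  # assumed quant filename
)

llm = Llama(model_path=model_path, n_ctx=4096)  # 4k context, per the model name

response = llm.create_chat_completion(
    messages=[
        {"role": "user",
         "content": "What's the difference between a banana and a strawberry?"},
    ],
    temperature=0.7,  # matches the widget default in the YAML header
)
print(response["choices"][0]["message"]["content"])
```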

# Original Model Card

# Phi-3-mini-128k-instruct- ~~abliterated-v3~~ -geminified

Credit for the name goes to [u/Anduin1357](https://www.reddit.com/user/Anduin1357/) on Reddit, who [wrote this comment](https://www.reddit.com/r/LocalLLaMA/comments/1cmh6ru/comment/l31zkan/).

[My Jupyter "cookbook" to replicate the methodology can be found here; a refined library is coming soon.](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb)

## What's this?

Well, after my abliterated models, I figured I should cover all the possible ground of such work and introduce a model that acts like their polar opposite. This is the result, and I feel it lines up in performance with a certain search engine's AI model series.

## Summary

This is [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on the one described in the preview paper/blog post '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read to understand more.

This model has been orthogonalized to act more like certain rhymes-with-Shmemini models.
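
For the curious, the core of the orthogonalization is simple linear algebra. The sketch below is an illustrative rendering of the idea from the linked post, not the author's exact cookbook: extract a unit "behavior direction" as the difference of mean residual-stream activations over two contrastive prompt sets, then project that direction out of weight matrices that write into the residual stream. All function names and shapes here are assumptions.

```python
import torch

def behavior_direction(acts_a: torch.Tensor, acts_b: torch.Tensor) -> torch.Tensor:
    """Difference-of-means direction between two (n_samples, d_model) batches
    of residual-stream activations, normalized to unit length."""
    diff = acts_a.mean(dim=0) - acts_b.mean(dim=0)
    return diff / diff.norm()

def orthogonalize(W: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    """Project direction d out of a (d_model, d_in) matrix W that writes into
    the residual stream: W' = (I - d d^T) W, so W' x has no component along d
    for any input x."""
    d = d / d.norm()
    return W - torch.outer(d, d @ W)
```

In abliteration-style implementations this is typically applied per layer to the matrices that write into the residual stream (e.g. the attention output projection and the MLP down-projection), so the model can no longer write along that direction; that is what "orthogonalized bfloat16 safetensor weights" refers to above.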