jamesburton committed on
Commit
fadb488
1 Parent(s): 9f77b1b

Added GGUF generation script and configuration, plus a brief note

Files changed (2)
  1. README.md +5 -1
  2. imatrix/imatrix.txt +0 -0
README.md CHANGED
@@ -4,4 +4,8 @@ This is a GGUF version of https://huggingface.co/PhilipMay/Phi-3-mini-4k-instruc
 
  The source model is an 8x MoE version of microsoft/Phi-3-mini-4k-instruct. It is based on the Llamafied version vonjack/Phi-3-mini-4k-instruct-LLaMAfied by Gan Feng.
 
- It was created with the help of mergekit.
+ It was created with the help of mergekit.
+
+ I have included the gguf-imat.py script and the imatrix/imatrix.txt configuration used for the conversion. This is based on FantasiaFoundry/GGUF-Quantization-Script, tweaked to pad the vocabulary so that it works with this model.
+
+ This model has been tested and is functional with LlamaSharp, so it should be compatible with any llama.cpp-based solution.
imatrix/imatrix.txt ADDED
The diff for this file is too large to render. See raw diff
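
The README text above describes an imatrix-based GGUF conversion with a vocab-padding tweak. The sketch below is a rough, hypothetical outline of that kind of flow driven from Python via subprocess calls to llama.cpp's tooling; it is not the actual gguf-imat.py from this commit, and the script names, flag names (e.g. --pad-vocab), paths, and quantization type are assumptions that vary with the llama.cpp revision in use.

```python
# Hypothetical sketch of a GGUF conversion + imatrix quantization flow using
# llama.cpp tooling. Paths, script names, and flags are assumptions and depend
# on the llama.cpp revision; this is not the repository's gguf-imat.py.
import subprocess
from pathlib import Path

LLAMA_CPP = Path("llama.cpp")  # assumed local checkout of llama.cpp
MODEL_DIR = Path("source-model")  # assumed local snapshot of the source HF model
F16_GGUF = Path("model-f16.gguf")
IMATRIX_DAT = Path("imatrix.dat")

# 1. Convert the HF checkpoint to an f16 GGUF. A vocab-padding option (such as
#    --pad-vocab in some llama.cpp convert scripts) adds padding tokens when the
#    model's vocab size exceeds what the tokenizer metadata provides, which is
#    the kind of tweak the README mentions.
subprocess.run(
    ["python", str(LLAMA_CPP / "convert.py"), str(MODEL_DIR),
     "--outtype", "f16", "--outfile", str(F16_GGUF), "--pad-vocab"],
    check=True,
)

# 2. Build an importance matrix from the calibration text (imatrix/imatrix.txt
#    in this repo plays that role).
subprocess.run(
    [str(LLAMA_CPP / "imatrix"), "-m", str(F16_GGUF),
     "-f", "imatrix/imatrix.txt", "-o", str(IMATRIX_DAT)],
    check=True,
)

# 3. Produce an imatrix-guided quantization (Q4_K_M shown only as an example).
subprocess.run(
    [str(LLAMA_CPP / "quantize"), "--imatrix", str(IMATRIX_DAT),
     str(F16_GGUF), "model-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```

In a flow like this, the calibration text supplies activations that weight the quantization error toward the most important tensors, which is why the imatrix.txt configuration is shipped alongside the conversion script.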