---
base_model: microsoft/Phi-3-mini-128k-instruct
inference: false
language:
- en
license: other
license_link: https://huggingface.co/microsoft/phi-3/resolve/main/LICENSE
license_name: microsoft-research-license
model_creator: Microsoft
model_name: Phi 3 Mini 128K Instruct
model_type: phi3
pipeline_tag: text-generation
prompt_template: 'Instruct: {prompt}

  Output:

  '
quantized_by: HDKLK
tags:
- nlp
- code
---
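
For reference, the `prompt_template` above is a plain completion-style template rather than a chat format. Below is a minimal sketch (not part of the original card; the example instruction is made up) of how it expands in Python:

```python
# Illustrates the card's prompt template; the instruction text is invented.
template = "Instruct: {prompt}\n\nOutput:"
print(template.format(prompt="Summarize the GGUF format in one sentence."))
# Prints:
# Instruct: Summarize the GGUF format in one sentence.
#
# Output:
```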
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Phi-3-mini-128k-instructQ2_K.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ2_K.gguf) | Q2_K | 2 | 1.42 GB | 3.67 GB | smallest, significant quality loss - not recommended for most purposes |
| [Phi-3-mini-128k-instructQ3_K_L.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ3_K_L.gguf) | Q3_K_L | 3 | 2.09 GB | 4.10 GB | small, substantial quality loss |
| [Phi-3-mini-128k-instructQ3_K_M.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ3_K_M.gguf) | Q3_K_M | 3 | 1.96 GB | 3.98 GB | very small, high quality loss |
| [Phi-3-mini-128k-instructQ3_K_S.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ3_K_S.gguf) | Q3_K_S | 3 | 1.68 GB | 3.75 GB | very small, high quality loss |
| [Phi-3-mini-128k-instructQ4_K_M.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ4_K_M.gguf) | Q4_K_M | 4 | 2.39 GB | 4.29 GB | medium, balanced quality - recommended |
| [Phi-3-mini-128k-instructQ4_K_S.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ4_K_S.gguf) | Q4_K_S | 4 | 2.19 GB | 4.12 GB | small, greater quality loss |
| [Phi-3-mini-128k-instructQ5_K_M.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ5_K_M.gguf) | Q5_K_M | 5 | 2.82 GB | 4.57 GB | large, very low quality loss - recommended |
| [Phi-3-mini-128k-instructQ5_K_S.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ5_K_S.gguf) | Q5_K_S | 5 | 2.64 GB | 4.43 GB | large, low quality loss - recommended |
| [Phi-3-mini-128k-instructQ6_K.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ6_K.gguf) | Q6_K | 6 | 3.14 GB | 4.79 GB | very large, extremely low quality loss |
| [Phi-3-mini-128k-instructQ8_0.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ8_0.gguf) | Q8_0 | 8 | 4.06 GB | 5.46 GB | very large, extremely low quality loss - not recommended |
## How to download GGUF files
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
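
If you prefer to script the download, here is a minimal sketch using the `huggingface_hub` Python library (an assumption, installable with `pip install huggingface-hub`; it is not one of the clients listed above). The chosen file is illustrative and can be swapped for any quant in the table:

```python
# Fetch a single quant file from this repo into the local HF cache;
# swap `filename` for any entry in the Provided files table.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="HDKLK/Phi-3-mini-128k-instruct-gguf",
    filename="Phi-3-mini-128k-instructQ4_K_M.gguf",
)
print("Downloaded to:", path)
```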
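
Once a `.gguf` file is on disk, the sketch below shows one way to run it with the `llama-cpp-python` bindings, applying the prompt template from this card. The bindings are an assumption on this card's part; any llama.cpp-compatible runtime works, and the model path and generation settings are illustrative:

```python
# Load a quant and generate with the card's "Instruct: ... Output:" template.
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-3-mini-128k-instructQ4_K_M.gguf",  # pick a quant that fits your RAM
    n_ctx=4096,  # context window; the 128k maximum needs far more memory
)

prompt = "Instruct: Explain GGUF quantization in one sentence.\n\nOutput:"
result = llm(prompt, max_tokens=128, stop=["Instruct:"])
print(result["choices"][0]["text"].strip())
```

Q4_K_M is used here because the table above marks it as the balanced, recommended quant.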