---
base_model: microsoft/Phi-3-mini-128k-instruct
inference: false
language:
- en
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
model_creator: Microsoft
model_name: Phi 3 mini 128k instruct
model_type: phi3
pipeline_tag: text-generation
prompt_template: |
  <|user|>
  {prompt}<|end|>
  <|assistant|>
quantized_by: HDKLK
tags:
- nlp
- code
---

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Phi-3-mini-128k-instructQ2_K.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ2_K.gguf) | Q2_K | 2 | 1.42 GB | 3.67 GB | smallest, significant quality loss - not recommended for most purposes |
| [Phi-3-mini-128k-instructQ3_K_L.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ3_K_L.gguf) | Q3_K_L | 3 | 2.09 GB | 4.10 GB | small, substantial quality loss |
| [Phi-3-mini-128k-instructQ3_K_M.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ3_K_M.gguf) | Q3_K_M | 3 | 1.96 GB | 3.98 GB | very small, high quality loss |
| [Phi-3-mini-128k-instructQ3_K_S.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ3_K_S.gguf) | Q3_K_S | 3 | 1.68 GB | 3.75 GB | very small, high quality loss |
| [Phi-3-mini-128k-instructQ4_K_M.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ4_K_M.gguf) | Q4_K_M | 4 | 2.39 GB | 4.29 GB | medium, balanced quality - recommended |
| [Phi-3-mini-128k-instructQ4_K_S.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ4_K_S.gguf) | Q4_K_S | 4 | 2.19 GB | 4.12 GB | small, greater quality loss |
| [Phi-3-mini-128k-instructQ5_K_M.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ5_K_M.gguf) | Q5_K_M | 5 | 2.82 GB | 4.57 GB | large, very low quality loss - recommended |
| [Phi-3-mini-128k-instructQ5_K_S.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ5_K_S.gguf) | Q5_K_S | 5 | 2.64 GB | 4.43 GB | large, low quality loss - recommended |
| [Phi-3-mini-128k-instructQ6_K.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ6_K.gguf) | Q6_K | 6 | 3.14 GB | 4.79 GB | very large, extremely low quality loss |
| [Phi-3-mini-128k-instructQ8_0.gguf](https://huggingface.co/HDKLK/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instructQ8_0.gguf) | Q8_0 | 8 | 4.06 GB | 5.46 GB | very large, extremely low quality loss - not recommended |

**Note:** the RAM figures above assume no GPU offloading. Offloading layers to the GPU reduces RAM usage and uses VRAM instead.

## How to download GGUF files

The following clients/libraries will automatically download models for you, presenting a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

For scripted downloads, see the sketch below.
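A minimal sketch for fetching a single quant file from this repo with the `huggingface_hub` Python library, rather than cloning the whole repository. The choice of Q4_K_M is just an example, taken from the recommended rows in the table above; any filename from that table works.

```python
# Minimal download sketch, assuming `pip install huggingface-hub`.
# repo_id and filename come from the "Provided files" table above.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="HDKLK/Phi-3-mini-128k-instruct-gguf",
    filename="Phi-3-mini-128k-instructQ4_K_M.gguf",
)
print(model_path)  # local cache path of the downloaded GGUF file
```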
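Once downloaded, the file can be loaded by any llama.cpp-compatible runtime. Below is a minimal sketch using `llama-cpp-python` (an assumption; other runtimes work equally well), with the prompt string following the template declared in this card's metadata. The `model_path` variable is assumed to come from the download sketch above.

```python
# Minimal inference sketch, assuming `pip install llama-cpp-python`
# and `model_path` from the download step above.
from llama_cpp import Llama

# n_ctx is a modest default; raise it toward 128k as RAM allows.
llm = Llama(model_path=model_path, n_ctx=4096)

# Prompt format follows this card's prompt_template metadata.
prompt = "<|user|>\nExplain GGUF quantization in one sentence.<|end|>\n<|assistant|>\n"
output = llm(prompt, max_tokens=128, stop=["<|end|>"])
print(output["choices"][0]["text"])
```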