
DeepSeek-Coder-V2-Lite-Instruct - SOTA GGUF

Description

This repo contains State Of The Art quantized GGUF format model files for DeepSeek-Coder-V2-Lite-Instruct.

Quantization was done with an importance matrix that was trained for ~250K tokens (64 batches of 4096 tokens) of answers from the CodeFeedback-Filtered-Instruction dataset.

Fill-in-Middle token metadata has been added; see the example below.

NOTE: Due to some of the tensors in this model being oddly shaped, a considerable portion of the quantization fell back to IQ4_NL instead of the specified method, resulting in somewhat larger (and "smarter"; even IQ1_M is quite usable) model files than usual!

Prompt template: DeepSeek v2

User: {prompt}

Assistant:
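For reference, this is a minimal sketch of how the template above could be filled in from Python before being handed to llama.cpp; build_prompt is just an illustrative helper for this card, not part of any library.

# Minimal sketch: substitute a user message into the DeepSeek v2 template shown above.
def build_prompt(user_message):
    return f"User: {user_message}\n\nAssistant:"

print(build_prompt("Write a binary search in Python."))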

Compatibility

These quantised GGUFv3 files are compatible with llama.cpp from May 29th 2024 onwards, as of commit fb76ec3.

They are also compatible with many third party UIs and libraries provided they are built using a recent llama.cpp.

Explanation of quantisation methods


The new methods available are:

  • GGML_TYPE_IQ1_S - 1-bit quantization in super-blocks with an importance matrix applied, effectively using 1.56 bits per weight (bpw)
  • GGML_TYPE_IQ1_M - 1-bit quantization in super-blocks with an importance matrix applied, effectively using 1.75 bpw
  • GGML_TYPE_IQ2_XXS - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.06 bpw
  • GGML_TYPE_IQ2_XS - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.31 bpw
  • GGML_TYPE_IQ2_S - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.5 bpw
  • GGML_TYPE_IQ2_M - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.7 bpw
  • GGML_TYPE_IQ3_XXS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.06 bpw
  • GGML_TYPE_IQ3_XS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.3 bpw
  • GGML_TYPE_IQ3_S - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.44 bpw
  • GGML_TYPE_IQ3_M - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.66 bpw
  • GGML_TYPE_IQ4_XS - 4-bit quantization in super-blocks with an importance matrix applied, effectively using 4.25 bpw
  • GGML_TYPE_IQ4_NL - 4-bit non-linearly mapped quantization with an importance matrix applied, effectively using 4.5 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
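As a rough sanity check, the sizes in the Provided Files table can be estimated from the effective bits per weight; the short Python sketch below assumes roughly 15.7B total parameters and ignores file metadata and the IQ4_NL fallback noted above, which is why the low-bit files in this repo come out noticeably larger in practice.

# Back-of-the-envelope file size from effective bits per weight (bpw).
# Assumes ~15.7B parameters; ignores metadata and the IQ4_NL fallback
# noted above, so the low-bit files here are noticeably larger in practice.
PARAMS = 15.7e9

def estimated_size_gib(bpw):
    return PARAMS * bpw / 8 / 1024**3  # bits -> bytes -> GiB

for name, bpw in [("IQ1_S", 1.56), ("IQ2_M", 2.7), ("IQ3_M", 3.66), ("IQ4_NL", 4.5)]:
    print(f"{name}: ~{estimated_size_gib(bpw):.1f} GiB")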

Provided files

Name | Quant method | Bits | Size | Max RAM required | Use case
DeepSeek-Coder-V2-Lite-Instruct.IQ1_S.gguf | IQ1_S | 1 | 4.5 GB | 5.5 GB | smallest, significant quality loss
DeepSeek-Coder-V2-Lite-Instruct.IQ1_M.gguf | IQ1_M | 1 | 4.7 GB | 5.7 GB | very small, significant quality loss
DeepSeek-Coder-V2-Lite-Instruct.IQ2_XXS.gguf | IQ2_XXS | 2 | 5.1 GB | 6.1 GB | very small, high quality loss
DeepSeek-Coder-V2-Lite-Instruct.IQ2_XS.gguf | IQ2_XS | 2 | 5.4 GB | 6.4 GB | very small, high quality loss
DeepSeek-Coder-V2-Lite-Instruct.IQ2_S.gguf | IQ2_S | 2 | 5.4 GB | 6.4 GB | small, substantial quality loss
DeepSeek-Coder-V2-Lite-Instruct.IQ2_M.gguf | IQ2_M | 2 | 5.7 GB | 6.7 GB | small, greater quality loss
DeepSeek-Coder-V2-Lite-Instruct.IQ3_XXS.gguf | IQ3_XXS | 3 | 6.3 GB | 7.3 GB | very small, high quality loss
DeepSeek-Coder-V2-Lite-Instruct.IQ3_XS.gguf | IQ3_XS | 3 | 6.5 GB | 7.5 GB | small, substantial quality loss
DeepSeek-Coder-V2-Lite-Instruct.IQ3_S.gguf | IQ3_S | 3 | 6.8 GB | 7.8 GB | small, greater quality loss
DeepSeek-Coder-V2-Lite-Instruct.IQ3_M.gguf | IQ3_M | 3 | 6.9 GB | 7.9 GB | medium, balanced quality - recommended
DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf | IQ4_NL | 4 | 8.1 GB | 9.1 GB | small, substantial quality loss

Generated importance matrix file: DeepSeek-Coder-V2-Lite-Instruct.imatrix.dat

Note: the above RAM figures assume no GPU offloading with 4K context. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
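If you are unsure which file to pick, a quick way to choose is to take the largest quant whose listed RAM requirement fits your machine; the helper below (best_quant is just an illustrative name) hardcodes the figures from the table above.

# Sketch: pick the largest quant whose "Max RAM required" (from the table above)
# fits a given memory budget; figures assume no GPU offloading at 4K context.
MAX_RAM_GB = {
    "IQ1_S": 5.5, "IQ1_M": 5.7, "IQ2_XXS": 6.1, "IQ2_XS": 6.4, "IQ2_S": 6.4,
    "IQ2_M": 6.7, "IQ3_XXS": 7.3, "IQ3_XS": 7.5, "IQ3_S": 7.8, "IQ3_M": 7.9,
    "IQ4_NL": 9.1,
}

def best_quant(budget_gb):
    fitting = [name for name, ram in MAX_RAM_GB.items() if ram <= budget_gb]
    return fitting[-1] if fitting else None  # dicts preserve insertion order

print(best_quant(8.0))  # -> IQ3_M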

Example llama.cpp command

Make sure you are using llama.cpp from commit fb76ec3 or later.

./llama-cli -ngl 28 -m DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf --color -c 131072 --temp 0 --repeat-penalty 1.1 -p "User: {prompt}\n\nAssistant:"

Change -ngl 28 to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change -c 131072 to the desired sequence length.

If you are low on VRAM/RAM, try quantizing the K-cache with -ctk q8_0 or even -ctk q4_0 for big memory savings (depending on context size). There is a similar option for the V-cache (-ctv), however that requires Flash Attention, which is not yet working with this model.
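If you run the model through llama-cpp-python instead of the CLI, there should be an equivalent knob; the sketch below assumes your installed version exposes the type_k parameter and the GGML_TYPE_Q8_0 constant, so verify against the library docs.

import llama_cpp
from llama_cpp import Llama

# Sketch: quantize the K-cache to q8_0 to save memory at large contexts.
# type_v is left alone, since V-cache quantization needs Flash Attention,
# which (per the note above) is not working with this model yet.
llm = Llama(
    model_path="./DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf",
    n_gpu_layers=28,
    n_ctx=131072,
    type_k=llama_cpp.GGML_TYPE_Q8_0,
)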

For other parameters and how to use them, please refer to the llama.cpp documentation.

How to run from Python code

You can use GGUF models from Python using the llama-cpp-python module.

How to load this model in Python code, using llama-cpp-python

For full documentation, please see: llama-cpp-python docs.

First install the package

Run one of the following commands, according to your system:

# Prebuilt wheel with basic CPU support
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
# Prebuilt wheel with NVidia CUDA acceleration (cu121 shown; use cu122 etc. to match your CUDA version)
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
# Prebuilt wheel with Metal GPU acceleration
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/metal
# Build base version with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# Or with Vulkan acceleration
CMAKE_ARGS="-DLLAMA_VULKAN=on" pip install llama-cpp-python
# Or with Kompute acceleration
CMAKE_ARGS="-DLLAMA_KOMPUTE=on" pip install llama-cpp-python
# Or with SYCL acceleration
CMAKE_ARGS="-DLLAMA_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUDA=on"
pip install llama-cpp-python

Simple llama-cpp-python example code

from llama_cpp import Llama

# Chat Completion API

llm = Llama(model_path="./DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf", n_gpu_layers=28, n_ctx=131072)
print(llm.create_chat_completion(
    repeat_penalty = 1.1,
    messages = [
        {
            "role": "user",
            "content": "Pick a LeetCode challenge and solve it in Python."
        }
    ]
))
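The chat API can also stream tokens as they are generated by passing stream=True; the chunk handling below follows llama-cpp-python's OpenAI-style deltas, but double-check the exact fields against the library docs for your version.

from llama_cpp import Llama

# Streaming sketch: print tokens as they arrive instead of waiting for the full reply.
llm = Llama(model_path="./DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf", n_gpu_layers=28, n_ctx=131072)
for chunk in llm.create_chat_completion(
    messages=[{"role": "user", "content": "Pick a LeetCode challenge and solve it in Python."}],
    stream=True,
):
    # Each chunk carries an OpenAI-style delta; "content" is absent in the first chunk.
    delta = chunk["choices"][0]["delta"]
    print(delta.get("content", ""), end="", flush=True)
print()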

Simple llama-cpp-python example fill-in-middle code

from llama_cpp import Llama

# Completion API

prompt = "def add("
suffix = "\n    return sum\n\n"

llm = Llama(model_path="./DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf", n_gpu_layers=28, n_ctx=131072)
output = llm.create_completion(
    temperature = 0.0,
    repeat_penalty = 1.0,
    prompt = prompt,
    suffix = suffix
)

# Models sometimes repeat suffix in response, attempt to filter that
response = output["choices"][0]["text"]
response_stripped = response.rstrip()
unwanted_response_suffix = suffix.rstrip()
unwanted_response_length = len(unwanted_response_suffix)

filtered = False
if unwanted_response_suffix and response_stripped[-unwanted_response_length:] == unwanted_response_suffix:
    response = response_stripped[:-unwanted_response_length]
    filtered = True

print(f"Fill-in-Middle completion{' (filtered)' if filtered else ''}:\n\n{prompt}\033[32m{response}\033[{'33' if filtered else '0'}m{suffix}\033[0m")

DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence

1. Introduction

We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens sourced from a high-quality, multi-source corpus. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2, while maintaining comparable performance in general language tasks. Compared to DeepSeek-Coder, DeepSeek-Coder-V2 demonstrates significant advancements in various aspects of code-related tasks, as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its support for programming languages from 86 to 338, while extending the context length from 16K to 128K.

In standard benchmark evaluations, DeepSeek-Coder-V2 achieves superior performance compared to closed-source models such as GPT4-Turbo, Claude 3 Opus, and Gemini 1.5 Pro in coding and math benchmarks. The list of supported programming languages can be found in the paper.

2. Model Downloads

We release DeepSeek-Coder-V2 to the public with 16B and 236B total parameters, based on the DeepSeekMoE framework, with only 2.4B and 21B active parameters respectively, in both base and instruct variants.

Model | #Total Params | #Active Params | Context Length | Download
DeepSeek-Coder-V2-Lite-Base | 16B | 2.4B | 128k | 🤗 HuggingFace
DeepSeek-Coder-V2-Lite-Instruct | 16B | 2.4B | 128k | 🤗 HuggingFace
DeepSeek-Coder-V2-Base | 236B | 21B | 128k | 🤗 HuggingFace
DeepSeek-Coder-V2-Instruct | 236B | 21B | 128k | 🤗 HuggingFace

3. Chat Website

You can chat with DeepSeek-Coder-V2 on DeepSeek's official website: coder.deepseek.com

4. API Platform

We also provide an OpenAI-compatible API at the DeepSeek Platform: platform.deepseek.com. Sign up to get millions of free tokens, or pay as you go at an unbeatable price.
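Because the API is OpenAI-compatible, the standard openai Python client can be pointed at it; the base_url and model name below are assumptions, so check the platform documentation and your account for the current values.

from openai import OpenAI

# Sketch of an OpenAI-compatible call; base_url and model name are assumptions,
# see platform.deepseek.com for the current values and use your own API key.
client = OpenAI(api_key="<your DeepSeek API key>", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-coder",
    messages=[{"role": "user", "content": "write a quick sort algorithm in python."}],
)
print(response.choices[0].message.content)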

5. How to run locally

Here, we provide some examples of how to use the DeepSeek-Coder-V2-Lite model. If you want to utilize DeepSeek-Coder-V2 in BF16 format for inference, 8x 80GB GPUs are required.

Inference with Hugging Face's Transformers

You can directly employ Hugging Face's Transformers for model inference.

Code Completion

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
input_text = "#write a quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Code Insertion

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
input_text = """<|fim▁begin|>def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    left = []
    right = []
<|fim▁hole|>
        if arr[i] < pivot:
            left.append(arr[i])
        else:
            right.append(arr[i])
    return quick_sort(left) + [pivot] + quick_sort(right)<|fim▁end|>"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)[len(input_text):])

Chat Completion

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
messages=[
    { 'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# tokenizer.eos_token_id is the id of the <|end▁of▁sentence|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))

The complete chat template can be found in tokenizer_config.json in the Hugging Face model repository.

An example of the chat template is shown below:

<|begin▁of▁sentence|>User: {user_message_1}

Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}

Assistant:

You can also add an optional system message:

<|begin▁of▁sentence|>{system_message}

User: {user_message_1}

Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}

Assistant:
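To see exactly what string the template produces for a given conversation (with or without a system message), the tokenizer can render it directly; a small sketch using apply_chat_template with tokenize=False:

from transformers import AutoTokenizer

# Render the chat template to a plain string to inspect the prompt layout.
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True)
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "write a quick sort algorithm in python."},
]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))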

Inference with vLLM (recommended)

To utilize vLLM for model inference, please merge this Pull Request into your vLLM codebase: https://github.com/vllm-project/vllm/pull/4650.

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

max_model_len, tp_size = 8192, 1
model_name = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True, enforce_eager=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])

messages_list = [
    [{"role": "user", "content": "Who are you?"}],
    [{"role": "user", "content": "write a quick sort algorithm in python."}],
    [{"role": "user", "content": "Write a piece of quicksort code in C++."}],
]

prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)

6. License

This code repository is licensed under the MIT License. The use of DeepSeek-Coder-V2 Base/Instruct models is subject to the Model License. DeepSeek-Coder-V2 series (including Base and Instruct) supports commercial use.

7. Contact

If you have any questions, please raise an issue or contact us at service@deepseek.com.
