DeepSeek-R1-Distill-Qwen-1.5B Quantized Models

This repository contains Q4_KM and Q5_KM quantized versions of the DeepSeek-R1-Distill-Qwen-1.5B model, optimized for efficient deployment while maintaining strong performance.

Discover our full range of quantized language models on our SandLogic Lexicon Hugging Face page. To learn more about our company and services, visit our website at SandLogic.

Model Description

These models are quantized versions of DeepSeek-R1-Distill-Qwen-1.5B, which is a highly efficient distilled 1.5B parameter model based on the Qwen architecture. This lightweight model demonstrates that reasoning patterns from larger models can be effectively distilled into much smaller architectures, making it ideal for resource-constrained deployments.

Key Features

  • Ultra-lightweight model with only 1.5B parameters
  • Fine-tuned using DeepSeek-R1 generated reasoning data
  • Modified configurations and tokenizer optimized for performance
  • Excellent balance of performance and resource efficiency
  • Perfect for edge devices and limited compute environments

Available Quantized Versions

  1. Q4_KM Version

    • 4-bit quantization using llama.cpp's k-quant method (K_M variant)
    • Approximately 1.12GB model size
    • Exceptional efficiency for deployment
    • Ideal for mobile and edge devices
  2. Q5_KM Version

    • 5-bit quantization using llama.cpp's k-quant method (K_M variant)
    • Approximately 1.30GB model size
    • Higher precision while maintaining small size
    • Recommended for balanced performance requirements
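Both variants can be fetched programmatically with the huggingface_hub library. The snippet below is a minimal sketch; the filename is illustrative, so check the repository's file list for the exact GGUF filenames:

from huggingface_hub import hf_hub_download

# Download the 4-bit variant; replace the filename with the exact
# name listed in the repository's files (the one below is assumed).
model_path = hf_hub_download(
    repo_id="SandLogicTechnologies/DeepSeek-R1-Distill-Qwen-1.5B-GGUF",
    filename="DeepSeek-R1-Distill-Qwen-1.5B-Q4_KM.gguf",
)
print(model_path)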

Usage

pip install llama-cpp-python 

Please refer to the llama-cpp-python documentation to install with GPU support.
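For example, recent versions of llama-cpp-python can typically be built with CUDA support by setting the corresponding CMake flag at install time (the exact flag may differ between versions, so check the documentation for the version you are installing):

CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python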

Basic Text Completion

Here's an example demonstrating how to use the high-level API for basic text completion:

from llama_cpp import Llama

llm = Llama(
    model_path="model/path/",
    verbose=False,
    # n_gpu_layers=-1, # Uncomment to use GPU acceleration
    # n_ctx=2048, # Uncomment to increase the context window
)

# Example of a simple task
output = llm(
    "Q: What are the benefits of using smaller language models? A: ",
    max_tokens=128,
    stop=["Q:", "\n\n"],
    echo=False
)

print(output["choices"][0]["text"])

Model Configuration Changes

Please note that DeepSeek has made slight modifications to the original Qwen-1.5B configuration and tokenizer to optimize performance. When using these models, make sure you use the provided settings rather than the original Qwen-1.5B configuration.

Deployment Benefits

  • Minimal RAM requirements (< 2GB)
  • Fast inference speed
  • Suitable for CPU-only environments (see the sketch below)
  • Excellent for edge computing applications
  • Efficient batching capabilities
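
For CPU-only deployment, thread count and context size are the main tuning knobs. A minimal sketch, assuming llama-cpp-python's standard constructor parameters:

from llama_cpp import Llama

llm = Llama(
    model_path="path/to/model.gguf",  # placeholder: your downloaded GGUF file
    n_threads=4,   # set to your physical core count for best CPU throughput
    n_ctx=2048,    # context window; larger values increase memory use
    verbose=False,
)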

License

This model inherits the license of the original DeepSeek-R1-Distill-Qwen-1.5B model. Please refer to the original model's license for usage terms and conditions.

Acknowledgments

We thank the DeepSeek AI team for open-sourcing their distilled models and demonstrating that even very small models can achieve impressive performance through effective distillation techniques. Special thanks also to the Qwen team for providing the base model architecture.
