Instructions to use Qwen/Qwen3-4B-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use Qwen/Qwen3-4B-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Qwen/Qwen3-4B-GGUF",
    filename="Qwen3-4B-Q4_K_M.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Qwen/Qwen3-4B-GGUF with llama.cpp:
Install with Homebrew (macOS/Linux)
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Qwen/Qwen3-4B-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Qwen/Qwen3-4B-GGUF:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Qwen/Qwen3-4B-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Qwen/Qwen3-4B-GGUF:Q4_K_M
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Qwen/Qwen3-4B-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf Qwen/Qwen3-4B-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Qwen/Qwen3-4B-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Qwen/Qwen3-4B-GGUF:Q4_K_M
Use Docker
docker model run hf.co/Qwen/Qwen3-4B-GGUF:Q4_K_M
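However you install it, llama-server exposes an OpenAI-compatible HTTP API. Here is a minimal Python sketch for calling it; port 8080 is the llama-server default, and the model field is omitted because the server hosts the single loaded model (adjust if your setup differs):

# pip install requests
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    },
)
print(resp.json()["choices"][0]["message"]["content"])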
- LM Studio
- Jan
- vLLM
How to use Qwen/Qwen3-4B-GGUF with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Qwen/Qwen3-4B-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Qwen/Qwen3-4B-GGUF",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
Use Docker
docker model run hf.co/Qwen/Qwen3-4B-GGUF:Q4_K_M
- Ollama
How to use Qwen/Qwen3-4B-GGUF with Ollama:
ollama run hf.co/Qwen/Qwen3-4B-GGUF:Q4_K_M
- Unsloth Studio new
How to use Qwen/Qwen3-4B-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Qwen/Qwen3-4B-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Qwen/Qwen3-4B-GGUF to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Qwen/Qwen3-4B-GGUF to start chatting
- Pi new
How to use Qwen/Qwen3-4B-GGUF with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Qwen/Qwen3-4B-GGUF:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "Qwen/Qwen3-4B-GGUF:Q4_K_M" }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent new
How to use Qwen/Qwen3-4B-GGUF with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Qwen/Qwen3-4B-GGUF:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Qwen/Qwen3-4B-GGUF:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use Qwen/Qwen3-4B-GGUF with Docker Model Runner:
docker model run hf.co/Qwen/Qwen3-4B-GGUF:Q4_K_M
- Lemonade
How to use Qwen/Qwen3-4B-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Qwen/Qwen3-4B-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.Qwen3-4B-GGUF-Q4_K_M
List all available models
lemonade list
Qwen3-4B-GGUF
Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction following, agent capabilities, and multilingual support, with the following key features:
- Unique support for seamless switching between thinking mode (for complex logical reasoning, math, and coding) and non-thinking mode (for efficient, general-purpose dialogue) within a single model, ensuring optimal performance across various scenarios.
- Significantly enhanced reasoning capabilities, surpassing the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- Superior human preference alignment, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- Expertise in agent capabilities, enabling precise integration with external tools in both thinking and non-thinking modes, and achieving leading performance among open-source models in complex agent-based tasks.
- Support for 100+ languages and dialects, with strong capabilities for multilingual instruction following and translation.
Model Overview
Qwen3-4B has the following features:
Type: Causal Language Model
Training Stage: Pretraining & Post-training
Number of Parameters: 4.0B
Number of Parameters (Non-Embedding): 3.6B
Number of Layers: 36
Number of Attention Heads (GQA): 32 for Q and 8 for KV
Context Length: 32,768 tokens natively, and 131,072 tokens with YaRN.
Quantization: q4_K_M, q5_0, q5_K_M, q6_K, q8_0
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.
Quickstart
llama.cpp
Check out our llama.cpp documentation for a more detailed usage guide.
We advise you to clone llama.cpp and install it following the official guide. We follow the latest version of llama.cpp.
In the following demonstration, we assume that you are running commands from the llama.cpp repository directory.
./llama-cli -hf Qwen/Qwen3-4B-GGUF:Q8_0 --jinja --color -ngl 99 -fa -sm row --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0 --presence-penalty 1.5 -c 40960 -n 32768 --no-context-shift
ollama
Check out our ollama documentation for a more detailed usage guide.
You can run Qwen3 with one command:
ollama run hf.co/Qwen/Qwen3-4B-GGUF:Q8_0
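Once the model is running under Ollama, you can also call it programmatically. Below is a minimal Python sketch against Ollama's OpenAI-compatible endpoint; the default port 11434 and the /v1 path are Ollama defaults, not anything specific to this repo:

# pip install requests
import requests

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",  # Ollama's OpenAI-compatible endpoint
    json={
        "model": "hf.co/Qwen/Qwen3-4B-GGUF:Q8_0",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ],
    },
)
print(resp.json()["choices"][0]["message"]["content"])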
Switching Between Thinking and Non-Thinking Mode
You can add /think and /no_think to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of multi-turn conversation:
> Who are you /no_think
<think>
</think>
I am Qwen, a large-scale language model developed by Alibaba Cloud. [...]
> How many 'r's are in 'strawberries'? /think
<think>
Okay, let's see. The user is asking how many times the letter 'r' appears in the word "strawberries". [...]
</think>
The word strawberries contains 3 instances of the letter r. [...]
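The same per-turn switching works over the API. Here is a minimal sketch against a local llama-server started with --jinja as in the Quickstart command above; port 8080 is the llama-server default, and appending /think or /no_think to the user message is the mechanism described above:

# pip install requests
import requests

def chat(content: str) -> str:
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={"messages": [{"role": "user", "content": content}]},
    )
    return resp.json()["choices"][0]["message"]["content"]

# Non-thinking turn: the reply contains an empty <think></think> block.
print(chat("Who are you /no_think"))

# Thinking turn: the reply includes the model's reasoning inside <think>...</think>.
print(chat("How many 'r's are in 'strawberries'? /think"))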
Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the YaRN method.
To enable YaRN in llama.cpp:
./llama-cli ... -c 131072 --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
All notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required. It is also recommended to modify the factor as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set the factor to 2.0.
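For frameworks that read a transformers-style configuration (not needed when using the llama.cpp flags shown above), the rope_scaling entry for the 65,536-token example might look like the sketch below; treat the exact field names as an assumption to be checked against your framework's documentation:

# Hypothetical transformers-style configuration fragment, expressed as a Python dict.
rope_scaling = {
    "rope_type": "yarn",
    "factor": 2.0,  # 2.0 x 32,768 native tokens ~= 65,536-token contexts
    "original_max_position_embeddings": 32768,
}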
The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
Best Practices
To achieve optimal performance, we recommend the following settings:
Sampling Parameters:
- For thinking mode (enable_thinking=True), use Temperature=0.6, TopP=0.95, TopK=20, MinP=0, and PresencePenalty=1.5. DO NOT use greedy decoding, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (enable_thinking=False), we suggest using Temperature=0.7, TopP=0.8, TopK=20, MinP=0, and PresencePenalty=1.5.
- We recommend setting presence_penalty to 1.5 for quantized models to suppress repetitive outputs. You can adjust the presence_penalty parameter between 0 and 2. A higher value may occasionally lead to language mixing and a slight reduction in model performance. (A code sketch showing these settings follows at the end of this section.)
Adequate Output Length: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking.
- Math Problems: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- Multiple-Choice Questions: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the answer field with only the choice letter, e.g., "answer": "C"."
No Thinking Content in History: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This is already implemented in the provided Jinja2 chat template. However, for frameworks that do not use the Jinja2 chat template directly, it is up to the developers to ensure that this best practice is followed (see the sketch below).
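As a concrete illustration of the recommended sampling settings and of keeping thinking content out of the history, here is a minimal llama-cpp-python sketch. It assumes the Q4_K_M file from this repo and the <think>...</think> delimiters shown in the example above; adapt the parameter names if you use a different framework.

# pip install llama-cpp-python
import re
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Qwen/Qwen3-4B-GGUF",
    filename="Qwen3-4B-Q4_K_M.gguf",
    n_ctx=40960,  # leave room for long thinking-mode outputs
)

def strip_thinking(text: str) -> str:
    # Keep only the final answer; drop the <think>...</think> block
    # before storing the turn in the conversation history.
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

messages = [{"role": "user", "content": "How many 'r's are in 'strawberries'? /think"}]

# Thinking-mode sampling settings recommended above.
response = llm.create_chat_completion(
    messages=messages,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
    presence_penalty=1.5,
    max_tokens=32768,
)

reply = response["choices"][0]["message"]["content"]
messages.append({"role": "assistant", "content": strip_thinking(reply)})
print(reply)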
Citation
If you find our work helpful, feel free to cite us.
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
Model tree for Qwen/Qwen3-4B-GGUF
- Base model: Qwen/Qwen3-4B-Base