# ngxson/Deepthink-Reasoning-Adapter-Q8_0-GGUF

This LoRA adapter was converted to GGUF format from prithivMLmods/Deepthink-Reasoning-Adapter via ggml.ai's GGUF-my-lora space. Refer to the original adapter repository for more details.
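The same conversion can also be run locally. The sketch below assumes llama.cpp's `convert_lora_to_gguf.py` script; the paths are placeholders and the exact flags may differ between llama.cpp versions:

```bash
# hypothetical local conversion; both paths are placeholders
python convert_lora_to_gguf.py /path/to/Deepthink-Reasoning-Adapter \
    --base /path/to/base_model \
    --outtype q8_0 \
    --outfile Deepthink-Reasoning-Adapter-q8_0.gguf
```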

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora Deepthink-Reasoning-Adapter-q8_0.gguf (...other args)
```
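For example, to run a one-off prompt with the adapter applied (the base-model filename and prompt below are illustrative):

```bash
# illustrative invocation; base_model.gguf stands in for a GGUF build of the base model
llama-cli -m base_model.gguf \
    --lora Deepthink-Reasoning-Adapter-q8_0.gguf \
    -p "Explain step by step how binary search works." \
    -n 256
```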

```bash
# with server
llama-server -m base_model.gguf --lora Deepthink-Reasoning-Adapter-q8_0.gguf (...other args)
```
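Once the server is running (it listens on http://localhost:8080 by default), completions can be requested over its OpenAI-compatible API. The request below is a sketch; the message content and token limit are illustrative:

```bash
# illustrative request to the server's OpenAI-compatible chat endpoint
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
          "messages": [
            {"role": "user", "content": "Solve 17 * 23 step by step."}
          ],
          "max_tokens": 256
        }'
```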

For more details on LoRA usage with the llama.cpp server, refer to the llama.cpp server documentation.
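One related option is scaling the adapter's influence: llama.cpp provides a --lora-scaled flag that takes the adapter path followed by a scale factor. The scale value below is illustrative:

```bash
# apply the adapter at half strength (0.5 is an illustrative scale)
llama-server -m base_model.gguf \
    --lora-scaled Deepthink-Reasoning-Adapter-q8_0.gguf 0.5
```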

## Adapter details

- Format: GGUF
- Base model: Qwen/Qwen2.5-7B
- Size: 40.4M parameters
- Architecture: qwen2
- Quantization: 8-bit (Q8_0)