
WavGPT-1.5-GGUF

Quickstart

Check out our llama.cpp documentation for a more detailed usage guide.

We recommend cloning llama.cpp and installing it by following the official guide. We track the latest version of llama.cpp. In the following demonstration, we assume you are running commands from the llama.cpp repository directory.
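If llama.cpp is not set up yet, a minimal CPU build might look like the sketch below; the exact CMake options (for example a CUDA or Metal backend) depend on your platform, so treat this as an assumption and follow the official build guide for your setup.

# Minimal sketch: clone and build llama.cpp (CPU backend; adjust CMake options for GPU backends)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
# The llama-cli and llama-server binaries are typically placed under build/bin/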

Since cloning the entire repo can be inefficient, you can manually download the GGUF file you need or use huggingface-cli:

  1. Install:
    pip install -U huggingface_hub
    
  2. Download:
    huggingface-cli download Hack337/WavGPT-1.5-GGUF WavGPT-1.5.gguf --local-dir . --local-dir-use-symlinks False
    
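Note that recent huggingface_hub releases may ignore the --local-dir-use-symlinks flag (it is deprecated); the file is still written to the target directory either way. A quick check that the download finished:

ls -lh WavGPT-1.5.gguf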

For a chatbot-like experience, it is recommended to start in conversation mode:

# -co: colored output; -cnv: conversation (chat) mode; -p: system prompt
#   ("Вы очень полезный помощник." = "You are a very helpful assistant.");
# -fa: flash attention; -ngl 80: offload up to 80 layers to the GPU; -n 512: cap the response at 512 tokens.
./llama-cli -m <gguf-file-path> \
    -co -cnv -p "Вы очень полезный помощник." \
    -fa -ngl 80 -n 512
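
If you prefer an HTTP endpoint over the interactive CLI, llama.cpp also provides llama-server, which exposes an OpenAI-compatible API. The sketch below is illustrative only: the port and the request body are assumptions, not values from this card.

# Start an OpenAI-compatible server (port is an arbitrary choice)
./llama-server -m <gguf-file-path> -fa -ngl 80 --port 8080

# Query the chat completions endpoint
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "messages": [
    {"role": "system", "content": "Вы очень полезный помощник."},
    {"role": "user", "content": "Hello!"}
  ]
}'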

GGUF
Model size: 3.09B params
Architecture: qwen2

Model tree for Hack337/WavGPT-1.5-GGUF

Base model: Qwen/Qwen2.5-3B (this GGUF model is one of its quantized versions)