
Qwen2.5-72B-Instruct-GGUF

Original Model

Qwen/Qwen2.5-72B-Instruct

Run with LlamaEdge

  • LlamaEdge version: v0.14.3

  • Prompt template

    • Prompt type: chatml

    • Prompt string

      <|im_start|>system
      {system_message}<|im_end|>
      <|im_start|>user
      {prompt}<|im_end|>
      <|im_start|>assistant
      
  • Context size: 131072

  • Run as LlamaEdge service

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen2.5-72B-Instruct-Q5_K_M.gguf \
      llama-api-server.wasm \
      --model-name Qwen2.5-72B-Instruct \
      --prompt-template chatml \
      --ctx-size 131072
    
  • Run as LlamaEdge command app

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen2.5-72B-Instruct-Q5_K_M.gguf \
      llama-chat.wasm \
      --prompt-template chatml \
      --ctx-size 131072
    

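The service command above exposes an OpenAI-compatible HTTP API (by default on port 8080; adjust the host and port if you changed the listen address). A chat request against the running server might look like the following sketch — the payload follows the OpenAI chat-completions format, and the `model` field matches the `--model-name` flag above:

```shell
# Sketch: query the running llama-api-server (assumes the default 0.0.0.0:8080 listener).
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "Qwen2.5-72B-Instruct",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is WasmEdge?"}
    ]
  }'
```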
Quantized with llama.cpp b3751

GGUF

  • Model size: 72.7B params

  • Architecture: qwen2

  • Quantizations available: 4-bit, 5-bit, 8-bit


Model tree for second-state/Qwen2.5-72B-Instruct-GGUF

  • Base model: Qwen/Qwen2.5-72B

  • This model: quantized from the base model