Command R+ GGUF

Description

This repository contains GGUF weights of Command R+ for llama.cpp. Support for the model was added in llama.cpp release b2636. Since commit dd2d53a, all weights in this repo include the chat template.

In the imatrix folder, you can find importance-matrix (imatrix) quants. The importance matrix was computed using kalomaze's groups_merged.txt.
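For reference, an importance matrix of this kind is normally produced with llama.cpp's imatrix tool and then passed to quantize. The exact invocation used for this repo isn't recorded here, so the following is only a sketch with placeholder paths and an example quant type:

# Sketch: compute the importance matrix from groups_merged.txt, then use it when quantizing
./imatrix -m /path/to/command-r-plus-f16.gguf -f groups_merged.txt -o imatrix.dat
./quantize --imatrix imatrix.dat /path/to/command-r-plus-f16.gguf /path/to/command-r-plus-IQ2_XS.gguf IQ2_XS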

Quickstart

  1. Ensure that you have release b2636 or newer.
  2. Start with the command below (a multi-turn prompt sketch follows after this list):
./main -p "<|START_OF_TURN_TOKEN|><|USER_TOKEN|>Who are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>" --color -m /path/to/command-r-plus-Q3_K_L-00001-of-00002.gguf
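The prompt above contains a single user turn. A multi-turn conversation is expressed by concatenating turns with the same special tokens. The sketch below additionally assumes the standard Cohere chat template's <|SYSTEM_TOKEN|> for the preamble; check the chat template embedded in the GGUF if you need the exact format:

./main --color -m /path/to/command-r-plus-Q3_K_L-00001-of-00002.gguf -p "<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>You are a helpful assistant.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Who are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>I am Command R+, an AI assistant.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>What can you do?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"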

Perplexity on wikitext-2-raw [WIP]

Variant   PPL Value   Standard Deviation
Q2_K      5.7178      +/- 0.03418
Q3_K_L    4.6214      +/- 0.02629
Q4_K_M    4.4625      +/- 0.02522
f16       4.3845      +/- 0.02468
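Numbers of this kind are produced with llama.cpp's perplexity tool on the wikitext-2-raw test set. The exact context size and settings used for this table aren't recorded here, so treat the following as a sketch with placeholder paths:

./perplexity -m /path/to/command-r-plus-Q3_K_L-00001-of-00002.gguf -f wikitext-2-raw/wiki.test.raw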

Merging Weights

As of commit 8a28d12, the weights are split with gguf-split, so you don't have to merge them manually. Simply pass the first split, as in the example above, and llama.cpp will automatically load all remaining splits. If you still want to merge the splits into a single file, you can use the following command:

./gguf-split --merge /path/to/command-r-plus-f16-00001-of-00005.gguf /path/to/command-r-plus-f16-combined.gguf
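All splits of a variant have to be in the same directory for either approach to work. One way to fetch just the files for a single variant is huggingface-cli; this is a sketch, and the --include pattern assumes the file naming used in this repo:

huggingface-cli download pmysl/c4ai-command-r-plus-GGUF --include "command-r-plus-Q3_K_L-*.gguf" --local-dir .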