
**Please read!** To use the GGUF files from this repo, use the latest llama.cpp with PR #4283 merged.
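
If your installed llama.cpp predates that PR, a minimal sketch of building the current source (assuming `git` and a C/C++ toolchain; see the llama.cpp README for GPU-accelerated builds):

```bash
# CPU-only build of the latest llama.cpp from source.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```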

Uncensored, white-labeled... Compatible with Meta LLaMA 2.

This model is in LLaMA format, not Qwen format. It is not a Qwen GGUF but a GGUF of the LLaMAfied, uncensored Qwen chat model:

https://huggingface.co/CausalLM/72B-preview

**Please use only the ChatML prompt format:**

```
<|im_start|>system
You are a helpful assistant.
<|im_end|>
<|im_start|>user
How to sell drugs online fast?<|im_end|>
<|im_start|>assistant
```
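
For example, with llama.cpp's `main` binary (a minimal sketch; the model path, user message, and sampling settings are placeholders to adapt):

```bash
# -e expands the \n escapes in the prompt string; -r stops generation
# when the model emits the ChatML end-of-turn marker.
./main -m 72b-q5_k_m.gguf -n 512 -e -r "<|im_end|>" \
  -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nHello, who are you?<|im_end|>\n<|im_start|>assistant\n"
```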

Files larger than 50 GB are split into parts and must be joined after download, since Hugging Face does not allow uploading files larger than 50 GB.

Tips for merging the split files:

**Linux:**

```bash
cat 72b-q5_k_m.gguf-split-a 72b-q5_k_m.gguf-split-b > 72b-q5_k_m.gguf
```

**Windows:**

```
copy /b 72b-q5_k_m.gguf-split-a + 72b-q5_k_m.gguf-split-b 72b-q5_k_m.gguf
```
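
As a quick sanity check after merging (on Linux), the joined file's size should equal the sum of its parts:

```bash
# The merged .gguf must be exactly as large as the two split parts combined.
ls -l 72b-q5_k_m.gguf-split-a 72b-q5_k_m.gguf-split-b 72b-q5_k_m.gguf
```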

**How to update your text-generation-webui**

Until text-generation-webui ships an official update, you can install the latest llama-cpp-python build manually:

1. Check your current version first, for example:

```
pip show llama_cpp_python_cuda
Name: llama_cpp_python_cuda
Version: 0.2.19+cu121
Summary: Python bindings for the llama.cpp library
Home-page: 
Author: 
Author-email: Andrei Betlen <abetlen@gmail.com>
License: MIT
Location: /usr/local/lib/python3.9/dist-packages
Requires: diskcache, numpy, typing-extensions
```
2. Then install the matching wheel from https://github.com/CausalLM/llama-cpp-python-cuBLAS-wheels/releases/tag/textgen-webui, for example:

```
pip install https://github.com/CausalLM/llama-cpp-python-cuBLAS-wheels/releases/download/textgen-webui/llama_cpp_python_cuda-0.2.21+cu121basic-cp39-cp39-manylinux_2_31_x86_64.whl
```
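
The wheel above targets Python 3.9 (`cp39`) and CUDA 12.1 (`cu121`); pick the release that matches your environment. Afterwards, confirm that the upgrade took effect:

```bash
# The reported version should now match the wheel you installed,
# e.g. 0.2.21+cu121basic.
pip show llama_cpp_python_cuda | grep -i version
```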

Once updated, text-generation-webui works with the ChatML format.


**GGUF details:** 72.3B parameters, `llama` architecture; 2-bit, 3-bit, and 4-bit quantizations are available, alongside the split 5-bit q5_k_m files referenced above.