---
license: apache-2.0
inference: false
---
Chat & support: [TheBloke AI's Discord server](https://discord.gg/UBgz4VXf)

Want to contribute? [TheBloke's Patreon page](https://patreon.com/TheBlokeAI)

# Eric Hartford's WizardLM Uncensored Falcon 40B GPTQ

This repo contains an experimental GPTQ 4bit model of [Eric Hartford's WizardLM Uncensored Falcon 40B](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-40b). It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).

## Repositories available

* [4bit GPTQ model for GPU inference](https://huggingface.co/TheBloke/WizardLM-Uncensored-Falcon-40B-GPTQ).
* [Eric's float16 HF format model for GPU inference and further conversions](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-40b).

## EXPERIMENTAL

Please note this is an experimental GPTQ model. Support for it is currently quite limited. It is also expected to be **VERY SLOW**. This is unavoidable at the moment, but is being looked at.

To use it you will require:

1. AutoGPTQ, from the latest `main` branch and compiled with `pip install .`
2. `pip install einops`

You can then use it immediately from Python code - see example code below - or from text-generation-webui.

## AutoGPTQ

To install AutoGPTQ please follow these instructions:

```
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip install .
```

These steps will require that you have the [Nvidia CUDA toolkit](https://developer.nvidia.com/cuda-12-0-1-download-archive) installed.

## text-generation-webui

There is also provisional AutoGPTQ support in text-generation-webui.

This requires text-generation-webui as of commit 204731952ae59d79ea3805a425c73dd171d943c3, so please first update text-generation-webui to the latest version.

## How to download and use this model in text-generation-webui

1. Launch text-generation-webui with the following command-line arguments: `--autogptq --trust-remote-code`
2. Click the **Model tab**.
3. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-Uncensored-Falcon-40B-GPTQ`.
4. Click **Download**.
5. Wait until it says it's finished downloading.
6. Click the **Refresh** icon next to **Model** in the top left.
7. In the **Model drop-down**: choose the model you just downloaded, `WizardLM-Uncensored-Falcon-40B-GPTQ`.
8. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!

## Prompt template

Prompt format is WizardLM:

```
What is a falcon? Can I keep one as a pet?
### Response:
```

## About `trust-remote-code`

Please be aware that this command-line argument causes Python code provided by Falcon to be executed on your machine.

This code is required at the moment because Falcon is too new to be supported by Hugging Face transformers. At some point in the future transformers will support the model natively, and then `trust_remote_code` will no longer be needed.

In this repo you can see two `.py` files - these are the files that get executed. They are copied from the base repo at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
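If you'd like to review that code before it runs, you can fetch the files from the Hub and read them first. Below is a minimal sketch; it assumes the repo uses the `configuration_RW.py` / `modelling_RW.py` filenames that Falcon repos used at the time of writing - check this repo's file list if they differ.

```python
# Sketch: download and read the custom model code before enabling
# trust_remote_code. The filenames below are an assumption based on the
# Falcon base repos; check the repo's file list if they differ.
from huggingface_hub import hf_hub_download

repo_id = "TheBloke/WizardLM-Uncensored-Falcon-40B-GPTQ"
for filename in ("configuration_RW.py", "modelling_RW.py"):
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    print(f"--- {filename} ---")
    print(open(path).read())
```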
## Simple Python example code

To run this code you need to install AutoGPTQ from source:

```
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip install .
# This step requires CUDA toolkit installed
```

And install einops:

```
pip install einops
```

You can then run this example code:

```python
import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# If you've already downloaded the model, reference its location here:
quantized_model_dir = "/path/to/TheBloke_WizardLM-Uncensored-Falcon-40B-GPTQ"
# Or to download it from the hub and store it in the Hugging Face cache directory:
#quantized_model_dir = "TheBloke/WizardLM-Uncensored-Falcon-40B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, use_fast=False)

model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir,
        device="cuda:0",
        use_triton=False,
        use_safetensors=True,
        torch_dtype=torch.bfloat16,
        trust_remote_code=True)

prompt = "What is a falcon? Can I keep one as a pet?"
prompt_template = f"{prompt}\n### Response:"

tokens = tokenizer(prompt_template, return_tensors="pt").to("cuda:0").input_ids
output = model.generate(input_ids=tokens, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0]))
```

## Provided files

**gptq_model-4bit--1g.safetensors**

This will work with AutoGPTQ 0.2.0 and later.

It was created without group_size to reduce VRAM usage, and with `desc_act` (act-order) to improve inference accuracy.

* `gptq_model-4bit--1g.safetensors`
  * Works only with latest AutoGPTQ CUDA, compiled from source as of commit `3cb1bf5`
  * At this time it does not work with AutoGPTQ Triton, but support will hopefully be added in time.
  * Works with text-generation-webui using `--autogptq --trust-remote-code`
    * At this time it does NOT work with one-click-installers
  * Does not work with any version of GPTQ-for-LLaMa
  * Parameters: Groupsize = None. With act-order / desc_act.
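For reference, those parameters map onto AutoGPTQ's quantisation config. The sketch below shows how a file with these settings would typically be produced; it is not the exact script used for this repo, and the base model path and calibration example are placeholders.

```python
# Sketch of the quantisation settings described above: 4bit, no group size
# ("Groupsize = None"), with act-order (desc_act). Paths and calibration
# data are placeholders, not the actual script used for this repo.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model_dir = "/path/to/WizardLM-Uncensored-Falcon-40b"  # placeholder

quantize_config = BaseQuantizeConfig(
    bits=4,         # 4bit quantisation
    group_size=-1,  # no grouping, to reduce VRAM usage at inference time
    desc_act=True,  # act-order, to improve inference accuracy
)

tokenizer = AutoTokenizer.from_pretrained(base_model_dir, use_fast=False)
model = AutoGPTQForCausalLM.from_pretrained(
    base_model_dir, quantize_config, trust_remote_code=True
)

# A real run needs a representative set of tokenized calibration examples.
examples = [tokenizer("The falcon is a bird of prey.", return_tensors="pt")]
model.quantize(examples)
model.save_quantized("WizardLM-Uncensored-Falcon-40B-GPTQ", use_safetensors=True)
```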
## Discord

For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/UBgz4VXf)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Patreon special mentions**: Aemon Algiz; Dmitiry Samsonov; Jonathan Leane; Illia Dulskyi; Khalefa Al-Ahmad; Nikolai Manek; senxiiz; Talal Aujan; vamX; Eugene Pentland; Lone Striker; Luke Pendergrass; Johann-Peter Hartmann.

Thank you to all my generous patrons and donaters.

# Original model card

This is WizardLM trained on top of tiiuae/falcon-40b, with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.

Shout out to the open source AI/ML community, and everyone who helped me out.

Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.

Prompt format is WizardLM:

```
What is a falcon? Can I keep one as a pet?
### Response:
```

Thank you [chirper.ai](https://chirper.ai) for sponsoring some of my compute!