---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# FPHam's Karen The Editor 13B GPTQ
These files are GPTQ 4bit model files for [FPHam's Karen The Editor 13B](https://huggingface.co/FPHam/Karen_theEditor_13b_HF).
It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
## Other repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GPTQ)
* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/FPHam/Karen_theEditor_13b_HF)
## Prompt template
```
USER: Edit the following for spelling and grammar mistakes:
ASSISTANT:
```
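For example, filling the template from Python might look like the following minimal sketch (the variable name and sample sentence are illustrative, not from the original card):

```python
# Illustrative only: build a single editing prompt from the template above.
text_to_edit = "She dont like going their on weekends."

prompt = (
    "USER: Edit the following for spelling and grammar mistakes: "
    f"{text_to_edit}\n"
    "ASSISTANT:"
)
print(prompt)
```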
## How to easily download and use this model in text-generation-webui
### Downloading the model
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Karen_theEditor_13B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Untick **Autoload model**.
6. Click the **Refresh** icon next to **Model** in the top left.
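The files can also be fetched programmatically instead of through the webui. A minimal sketch using the `huggingface_hub` library (assuming it is installed; this is not part of the original instructions):

```python
# Hedged sketch: download the whole repo to the local Hugging Face cache.
from huggingface_hub import snapshot_download

local_path = snapshot_download(repo_id="TheBloke/Karen_theEditor_13B-GPTQ")
print(local_path)  # directory containing the downloaded files
```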
### To use with AutoGPTQ (if installed)
1. In the **Model drop-down**: choose the model you just downloaded, `Karen_theEditor_13B-GPTQ`.
2. Under **GPTQ**, tick **AutoGPTQ**.
3. Click **Save settings for this model** in the top right.
4. Click **Reload the Model** in the top right.
5. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
### To use with GPTQ-for-LLaMa
1. In the **Model drop-down**: choose the model you just downloaded, `Karen_theEditor_13B-GPTQ`.
2. If you see an error in the bottom right, ignore it - it's temporary.
3. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
4. Click **Save settings for this model** in the top right.
5. Click **Reload the Model** in the top right.
6. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
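Outside the webui, the model can also be loaded directly from Python with AutoGPTQ. The sketch below is illustrative, assuming `auto-gptq` and `transformers` are installed; the keyword arguments follow common AutoGPTQ usage rather than settings taken from this card, and the sample sentence is made up:

```python
# Minimal AutoGPTQ loading sketch (assumptions noted in the comments).
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/Karen_theEditor_13B-GPTQ"
# Basename of the .safetensors file listed under "Provided files" below.
model_basename = "Karen-The-Editor-GPTQ-4bit-128g.no-act.order"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    model_basename=model_basename,
    use_safetensors=True,
    device="cuda:0",
    use_triton=False,  # set True only if the Triton kernels are installed
)

# Prompt format from the "Prompt template" section above.
prompt = (
    "USER: Edit the following for spelling and grammar mistakes: "
    "She dont like going their on weekends.\n"
    "ASSISTANT:"
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
output = model.generate(input_ids=input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```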
## Provided files
**Karen-The-Editor-GPTQ-4bit-128g.no-act.order.safetensors**
This will work with all versions of GPTQ-for-LLaMa, and with AutoGPTQ.
It was created with groupsize 128 and without act-order (`desc_act`):
* `Karen-The-Editor-GPTQ-4bit-128g.no-act.order.safetensors`
  * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
  * Works with AutoGPTQ
  * Works with text-generation-webui one-click-installers
  * Parameters: Groupsize = 128. Act Order / desc_act = False.
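For reference, a hedged sketch of how those parameters map onto an AutoGPTQ quantisation config (the field names are standard `auto-gptq` and are not taken from this card; normally AutoGPTQ reads them from the repo's `quantize_config.json` when one is present):

```python
# Hedged sketch: the provided file's parameters expressed as an AutoGPTQ config.
from auto_gptq import BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit GPTQ
    group_size=128,  # Groupsize = 128
    desc_act=False,  # Act Order / desc_act = False
)
# Can be passed as `quantize_config=quantize_config` to
# AutoGPTQForCausalLM.from_quantized() if the repo lacks a quantize_config.json.
```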
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: FPHam's Karen The Editor 13B
## Karen is an editor for your fiction.
She fixes grammar and wording issues, but doesn't necessarily start rewording everything into corporate talk the way ChatGPT does. So she should keep the style intact.
Based on LLaMA 13B and a Wizard-Vicuna-uncensored finetune, then further finetuned on about 20k grammar examples (bad grammar/good grammar pairs).
Karen gets triggered by this prompt (pun intended):
```
USER: Edit the following for spelling and grammar mistakes:
ASSISTANT:
```
Feed it a paragraph at a time - a few sentences is where she works best.
Here is my instruct yaml for oobabooga webui (in Instruct mode):
```
user: "USER: Edit the following for spelling and grammar mistakes: "
bot: "ASSISTANT:"
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|></s>\n"
context: "Assistant is a meticulous senior editor with a specialization in editing fictional stories. When given text, Assistant detects and corrects grammatical errors, including subject-verb agreement, tense consistency, punctuation, capitalization, use of correct articles and correct present perfect and past perfect tense.\n"
```
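Rendered out, one turn of that template might look like the following minimal sketch (the sample sentence is illustrative; the strings come from the yaml above):

```python
# Illustrative rendering of the instruct yaml above into a single prompt string.
context = (
    "Assistant is a meticulous senior editor with a specialization in editing "
    "fictional stories. When given text, Assistant detects and corrects "
    "grammatical errors, including subject-verb agreement, tense consistency, "
    "punctuation, capitalization, use of correct articles and correct present "
    "perfect and past perfect tense.\n"
)
user = "USER: Edit the following for spelling and grammar mistakes: "
bot = "ASSISTANT:"
user_message = "She dont like going their on weekends."

# turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|></s>\n"
prompt = context + f"{user} {user_message}\n{bot} "
print(prompt)
```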
I am not sure to what extent the context part has any effect. It doesn't hurt, though.
This is version 0.2, finetuned lightly so that it retains the underlying Vicuna qualities.