---
library_name: transformers
tags:
- mergekit
- merge
---

# LuminRP-7B-128k-v0.5

LuminRP-7B-128k-v0.5 is a merge of various models that specializes in roleplay. It upgrades Ppoyaa/LuminRP-7B-128k-v0.4, fixing a minor issue with that release and improving the ERP experience.
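
The model was assembled with mergekit. The exact merge recipe is not published in this card, so purely as an illustration of what a mergekit config looks like, here is a generic sketch (the model names, method, and weights are placeholders, not the actual recipe):

```yaml
# Hypothetical example only, NOT the recipe used for LuminRP-7B-128k-v0.5.
# mergekit reads a YAML config like this and writes out the merged weights.
models:
  - model: org/roleplay-model-a   # placeholder
    parameters:
      weight: 0.5                 # share of the linear merge
  - model: org/roleplay-model-b   # placeholder
    parameters:
      weight: 0.5
merge_method: linear              # one of mergekit's built-in methods
dtype: float16
```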

***

> [!IMPORTANT]
> From my testing, this model performs better than v0.4 in both RP and ERP, and it is the version I recommend.

***

## Quants

By [mradermacher](https://huggingface.co/mradermacher):

* Static GGUF: [mradermacher/LuminRP-7B-128k-v0.5-GGUF](https://huggingface.co/mradermacher/LuminRP-7B-128k-v0.5-GGUF)
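
If you use a GGUF quant, one straightforward way to run it locally is through the llama-cpp-python bindings. A minimal sketch (the filename below is an example; use whichever quant file you downloaded):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Example filename; substitute the quant you actually downloaded.
llm = Llama(model_path="LuminRP-7B-128k-v0.5.Q4_K_M.gguf", n_ctx=8192)

# llama-cpp-python exposes an OpenAI-style chat completion API.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```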

***

## SillyTavern

**Template**: Alpaca, ChatML, and Mistral should be okay.

**Instruct Mode**: On
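
For reference, ChatML (one of the templates above) wraps each turn in `<|im_start|>` and `<|im_end|>` markers. A minimal sketch of a prompt in that format, using SillyTavern's `{{char}}` and `{{user}}` macros:

```
<|im_start|>system
You are {{char}}. Stay in character while talking to {{user}}.<|im_end|>
<|im_start|>user
Hello there!<|im_end|>
<|im_start|>assistant
```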

***

## 💻 Usage

```python
# Install the required libraries first:
# pip install -qU transformers accelerate

import torch
import transformers
from transformers import AutoTokenizer

model = "Ppoyaa/LuminRP-7B-128k-v0.5"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's built-in chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in half precision and spread it across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Generate up to 256 new tokens with sampling enabled.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
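
The sampling settings above (temperature 0.7, top_k 50, top_p 0.95) are a sensible starting point; for roleplay, nudging the temperature up a bit tends to give more varied replies at some cost to coherence.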