---
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# LuminRP-7B-128k-v0.5

## Description
LuminRP-7B-128k-v0.5 is a merge of various models that specializes in roleplaying. It is an upgrade over Ppoyaa/LuminRP-7B-128k-v0.4 that fixes a minor issue and improves the ERP experience.

From my testing, this model performs better than v0.4 in both RP and ERP, and it is the version I recommend.

This is the final version of the 7B variant, as I'm quite satisfied with its performance as a 7B model.
## Quants
By mradermacher:
- Static GGUF: mradermacher/LuminRP-7B-128k-v0.5-GGUF (a quick local-inference sketch follows below)
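
If you want to run the GGUF quants locally, one option is llama-cpp-python. Below is a minimal sketch, not a tested recipe: the quant filename pattern, `n_ctx` value, and sampling settings are placeholders, so check the GGUF repo for the files it actually contains.

```python
# Sketch: running a GGUF quant with llama-cpp-python
# (pip install llama-cpp-python huggingface_hub).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/LuminRP-7B-128k-v0.5-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level; pick a file that exists in the repo
    n_ctx=8192,               # raise toward 128k only if you have the RAM/VRAM for it
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short in-character greeting."}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```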
## SillyTavern
- Template: Alpaca, ChatML, and Mistral should all work (see the prompt sketch below)
- Instruct Mode: On
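
For reference outside SillyTavern, where you may have to format the prompt yourself, a ChatML-style prompt looks roughly like the sketch below. The persona and message text are only placeholder examples, not required wording.

```python
# Minimal sketch of a ChatML-formatted prompt for frontends without template presets.
# The system/persona text below is just an example.
system = "You are Nova, a witty space pirate. Stay in character."
user = "We just docked at the station. What do you do?"

chatml_prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)
print(chatml_prompt)
```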
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Ppoyaa/LuminRP-7B-128k-v0.5"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model into a text-generation pipeline.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
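
For roleplay you'll usually want to carry the conversation history forward between turns. The sketch below reuses the `tokenizer` and `pipeline` objects from the snippet above; the persona and turn text are placeholders, and the persona is folded into the first user message to stay compatible with chat templates that don't accept a separate system role.

```python
# Minimal multi-turn sketch reusing `tokenizer` and `pipeline` from above.
chat = [
    {"role": "user", "content": "Roleplay as Mira, a sarcastic tavern keeper. "
                                "I push open the tavern door and shake the rain off my cloak."},
]

prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
reply = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7,
                 top_p=0.95, return_full_text=False)[0]["generated_text"]
print(reply)

# Append each turn so later generations see the whole conversation
# (the context can grow toward 128k tokens, memory permitting).
chat.append({"role": "assistant", "content": reply})
chat.append({"role": "user", "content": "I order an ale and ask about the road north."})
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print(pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7,
               top_p=0.95, return_full_text=False)[0]["generated_text"])
```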
## Merge Details
The following models were included in the merge:
- ChaoticNeutrals/BuRP_7B
- Endevor/InfinityRP-v1-7B
- Nitral-AI/Kunocchini-7b-128k-test
- core-3/kuno-royale-v2-7b
- KatyTheCutie/LemonadeRP-4.5.3
- grimjim/kukulemon-7B
- MaziyarPanahi/Calme-7B-Instruct-v0.9
- icefog72/WestIceLemonTeaRP-32k-7b
- crestf411/daybreak-kunoichi-2dpo-7b
- Undi95/Mistral-RP-0.1-7B