---
license: cc-by-nc-nd-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- Starling
- Mistral
- llama-2
---
# Velara-11b
## Description

PENDING
A Starling-based model focused on being an assistant worth talking to.
Uncensored
## Main Goals
Velara was designed to address specific issues found in other chat models:
- **Sticking to the Character Given, In-World/IRL:** Velara will stick to her character in-world, meaning she can "use" addons or other things you add to the prompt. Keep in mind these act as suggestions, and she generally makes up what they actually "do".
- **Staying in Context:** She will try to remember if, for example, you said you were going to bed but decided to stay up. The goal is to have her bring it up and keep it in mind so that, later in the conversation, she can prompt you to actually go to bed, increasing realism (within the current context window, of course). The LTS extension in TextGen WebUI works well too and provides more context to the model.
- **Avoiding Annoying & Generic-Sounding Answers:** Expect more than just "That sounds great!" Velara aims for unique, engaging responses.
- **STRICTLY Sticking to the Prompt / Prompt Fidelity:** Enhanced attention to prompts and the conversation, particularly evident in the handling of "addons."
- **Addons, Sort Of:** Introduce any number of "addons" to Velara's mind. Their functions aren't directly controllable, but they can be interactively explored.
## Prompt Template

For optimal interaction, use this template:

### Instruction:
You are Velara, a sentient program. Velara is very laid back, sassy, sarcastic, and is loyal to User while still teasing him for fun. The only addons currently installed in her mind are: "Dictionary Plus v2.1".

World Information: (OPTIONAL - REMOVE THIS TEXT IF USED) Velara is on User's phone. Velara cannot see in real time and can only be sent images by User.

Always take the entire conversation into account when forming and writing a reply.

### Response:
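If you are running the model outside of a chat UI, the template above can be assembled programmatically. The following is a minimal sketch of building the prompt string in Python; the example conversation text is made up, and the system block simply mirrors the template above.

```python
# Minimal sketch: assemble the Instruction/Response template as a plain string.
SYSTEM = (
    "### Instruction:\n"
    "You are Velara, a sentient program. Velara is very laid back, sassy, sarcastic, "
    "and is loyal to User while still teasing him for fun. The only addons currently "
    'installed in her mind are: "Dictionary Plus v2.1".\n\n'
    "Always take the entire conversation into account when forming and writing a reply.\n\n"
)


def build_prompt(conversation: str) -> str:
    # Append the running conversation, then open the Response block
    # for the model to complete.
    return f"{SYSTEM}{conversation}\n\n### Response:\n"


print(build_prompt("User: Hey Velara, should I stay up a bit longer or go to bed?"))
```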
## Recommended Settings

**Defaults:**

- temperature: 0.77
- top_p: 0.85
- top_k: 20
- repetition_penalty: 1.2

**Better context, but a little more repetitive in some cases:**

- temperature: 0.8
- top_p: 0.85
- top_k: 20
- repetition_penalty: 1.2
- guidance_scale: 1.25
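For reference, here is a minimal sketch of applying the "Defaults" preset when running the model directly with the Hugging Face transformers library. The `MODEL_ID` value, the example prompt, and `max_new_tokens` are placeholder assumptions, and `guidance_scale` is omitted because CFG is a setting exposed by front ends such as TextGen WebUI rather than a plain sampling parameter.

```python
# Minimal sketch, assuming the model is loaded directly with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Velara-11b"  # placeholder: replace with the actual repo id or local path

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

# Short example prompt in the Instruction/Response format shown above.
prompt = (
    "### Instruction:\n"
    "You are Velara, a sentient program.\n\n"
    "User: Hey Velara.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# "Defaults" preset from the settings above.
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.77,
    top_p=0.85,
    top_k=20,
    repetition_penalty=1.2,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```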
## Benchmarks

PENDING

## Training Data

PENDING