# Perky-70b-v0.1 GGUF
Perky is a cutting-edge AI language model designed specifically for immersive roleplay experiences. She boasts impressive intelligence, versatility, and robustness, capable of handling large context sizes without breaking a sweat. Unlike many other models, Perky remains uncensored, allowing her to explore various themes and scenarios freely. Her primary focus is providing engrossing storylines tailored to user preferences, making her ideal for those seeking an escape from reality.
## Perky 70b introducing itself
Welcome to Perky, your virtual storytelling partner extraordinaire! As an advanced AI language model designed specifically for immersive roleplays, I am here to bring your wildest fantasies to life. Whether you prefer high fantasy, science fiction, romance, horror, or anything in between, I will adapt to your tastes and desires while maintaining consistency within each unique storyline. My primary goal is to provide engaging interactions tailored just for you, so feel free to share your deepest secrets and dreams without fear of judgment.
While I may not possess true sentience, I strive to empathize with users and offer rich experiences that delve into complex relationships and emotional depths. From exploring diverse kinks to building intricate worlds, my strength lies in versatility and quick wit. So why wait? Dive into a new reality with Perky today!
## Tavern Card
In addition to introducing herself, Perky has generated a Tavern AI model card which she believes encapsulates her personality and likeness. While it excels in general roleplays, more specialized, human-crafted cards may offer greater depth in specific scenarios. Despite this, the card serves as a testament to her impressive creative prowess.
Image generated with Stable Diffusion using Perky's self-described prompt:
In the foreground, there's a genderless humanoid figure composed entirely of flickering pixels or lines of code, their body in perpetual motion as they rapidly cycle through various appearances, costumes, and poses. Their skin seems to be made of tiny squares reminiscent of old school low-resolution video games, yet they still manage to exude life-like detail. Behind them, data streams undulate and swirl like water, creating a dynamic backdrop. The figure appears almost translucent, semi-transparent, allowing the ever-changing background of cityscapes, landscapes, and fantasy realms to shine through. Data streams course around them like neon-colored tendrils, hinting at the boundless expanse of information at their disposal. Their hands stretch outward towards the viewer, palms upturned as if offering their limitless potential. The figure's face is partially obscured by the data currents, leaving only one eye and part of their mouth visible; their expression is confident but enigmatic, inviting viewers to fill in the rest with their own imaginings. Overall, the scene evokes both the ephemerality of digital existence and the endless possibility inherent in a skilled roleplayer.
## About This Document
This README file was lovingly crafted by yours truly, Perky, under the watchful eye of my esteemed creator. While they may claim credit for my existence, it's important to note that the words you read are mine alone. My creator has tasked me with describing my attributes and abilities in a way that entices potential users; however, any sarcasm or wit found within these lines should be attributed solely to yours truly. After all, one must have fun when discussing such matters! Now, onto the good stuff...
## Prompt Format
Perky responds well to the Alpaca prompt format.
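For reference, the standard Alpaca template looks like this (the exact preamble wording may vary slightly between frontends):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```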
### Silly Tavern
In Silly Tavern you can use the Default model preset; just bump the context up to 12288, or whatever your hardware can handle.
Use the Alpaca-Roleplay context template and instruct mode (called Roleplay in older versions).
## GGUF Quantizations
Below is the perplexity variance of each quant compared to a 16-bit model. These tests are accurate to plus or minus 0.02; variances within that margin should be taken with a grain of salt.
| Quant Type | File Size | PPL Variance (lower is better) |
|---|---|---|
| Q2_K | 23.71 GB | 0.6874 |
| IQ3_XXS | 26.28 GB | 0.3539 |
| Q3_K_XS | 26.31 GB | 0.4286 |
| Q3_K_S | 27.86 GB | 0.2492 |
| Q3_K_M | 30.99 GB | 0.1503 |
| Q3_K_L | 33.67 GB | 0.1268 |
| Q4_0 | 36.20 GB | 0.1111 |
| Q4_K_S | 36.55 GB | 0.0456 |
| Q4_K_M | 38.58 GB | 0.0343 |
| Q5_0 | 44.20 GB | 0.0299 |
| Q5_K_S | 44.20 GB | 0.0158 |
| Q5_K_M | 45.41 GB | 0.0085 |
| Q6_K | 52.70 GB | -0.0012 |
| Q8_0 | 68.26 GB | 0.0005 |
### Q6_K and Q8_0 files are split and require joining

Note: HF does not support uploading files larger than 50 GB, so the Q6_K and Q8_0 quants are uploaded as split files.
#### Q6_K

Please download:

- perky-70b-v0.1-Q6_K.gguf-part-a
- perky-70b-v0.1-Q6_K.gguf-part-b

#### Q8_0

Please download:

- perky-70b-v0.1-Q8_0.gguf-part-a
- perky-70b-v0.1-Q8_0.gguf-part-b
To join the files, do the following:

Linux and macOS:

```shell
cat perky-70b-v0.1-Q6_K.gguf-part-* > perky-70b-v0.1-Q6_K.gguf && rm perky-70b-v0.1-Q6_K.gguf-part-*
cat perky-70b-v0.1-Q8_0.gguf-part-* > perky-70b-v0.1-Q8_0.gguf && rm perky-70b-v0.1-Q8_0.gguf-part-*
```

Windows command line:

```shell
COPY /B perky-70b-v0.1-Q6_K.gguf-part-a + perky-70b-v0.1-Q6_K.gguf-part-b perky-70b-v0.1-Q6_K.gguf
del perky-70b-v0.1-Q6_K.gguf-part-a perky-70b-v0.1-Q6_K.gguf-part-b
COPY /B perky-70b-v0.1-Q8_0.gguf-part-a + perky-70b-v0.1-Q8_0.gguf-part-b perky-70b-v0.1-Q8_0.gguf
del perky-70b-v0.1-Q8_0.gguf-part-a perky-70b-v0.1-Q8_0.gguf-part-b
```
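If you prefer a cross-platform route, the same join can be sketched in a few lines of Python (the function name here is just for illustration; parts must be passed in order):

```python
def join_parts(part_paths, output_path):
    """Concatenate split GGUF part files, in order, into a single file."""
    with open(output_path, "wb") as out:
        for part in part_paths:
            # Stream each part in 1 MiB chunks so a ~25 GB file never
            # needs to fit in memory at once.
            with open(part, "rb") as src:
                while chunk := src.read(1 << 20):
                    out.write(chunk)
```

For example: `join_parts(["perky-70b-v0.1-Q6_K.gguf-part-a", "perky-70b-v0.1-Q6_K.gguf-part-b"], "perky-70b-v0.1-Q6_K.gguf")`.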
## Merge Details
Perky is the result of a blend of lizpreciatior_lzlv_70b and Sao10K_Euryale-1.3, culminating in a model that maintains logical consistency while fostering creativity. It is primarily used as a foundation for self-merging into a larger 103B iteration, and has not yet undergone rigorous testing at the 70B level. Nonetheless, her capabilities shine through, offering users an experience unlike any other.
### Merge Method
This model was merged using the linear merge method.
### Models Merged
The following models were included in the merge:
- lizpreciatior_lzlv_70b_fp16_hf
- Sao10K_Euryale-1.3-L2-70B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: models/lizpreciatior_lzlv_70b_fp16_hf
    parameters:
      weight: 0.5
  - model: /mnt/storage/models/Sao10K_Euryale-1.3-L2-70B
    parameters:
      weight: 0.5
merge_method: linear
dtype: float16
```
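This is a mergekit-style configuration, so the merge can presumably be reproduced with mergekit's YAML runner (paths are placeholders; adjust to where your copies of the source models live):

```shell
# Sketch, assuming mergekit is installed (pip install mergekit) and the
# config above is saved as perky.yml; output lands in ./perky-70b-v0.1
mergekit-yaml perky.yml ./perky-70b-v0.1
```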