llama-3.1-8b-OneLastStory-gguf - A Witty, High-Concept Storyteller

πŸš€ Model Description

llama-3.1-8b-OneLastStory-gguf is a fine-tuned version of Llama 3.1 8B Instruct, specifically crafted to be a master of high-concept, witty, darkly comedic, and intense creative writing.

This isn't your average storyteller. Trained on a curated dataset of absurd and imaginative scenariosβ€”from sentient taxidermy raccoons to cryptid dating appsβ€”this model excels at generating unique characters, crafting engaging scenes, and building fantastical worlds with a distinct, cynical voice. If you need a creative partner to brainstorm the bizarre, this is the model for you.

This model was fine-tuned using the Unsloth library for peak performance and memory efficiency.

Provided files:

  • LoRA adapter for use with the base model.
  • GGUF (q4_k_m) version for easy inference on local machines with llama.cpp, LM Studio, Ollama, etc.

πŸ’‘ Intended Use & Use Cases

This model is designed for creative and entertainment purposes. It's an excellent tool for:

  • Story Starters: Breaking through writer's block with hilarious and unexpected premises.
  • Character Creation: Generating unique character bios with strong, memorable voices.
  • Scene Generation: Writing short, punchy scenes in a dark comedy or absurd fantasy style.
  • Roleplaying: Powering a game master or character with a witty, unpredictable personality.
  • Creative Brainstorming: Generating high-concept ideas for stories, games, or scripts.

πŸ”§ How to Use

With Transformers (and Unsloth)

This model is a LoRA adapter. You must load it on top of the base model, unsloth/meta-llama-3.1-8b-instruct-bnb-4bit.

from unsloth import FastLanguageModel
from transformers import TextStreamer

model_repo = "samunder12/llama-3.1-8b-roleplay-v4-lora"
# The base model (unsloth/meta-llama-3.1-8b-instruct-bnb-4bit) is resolved
# automatically from the adapter's config.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = model_repo,
    max_seq_length = 4096,
    dtype = None,        # auto-detect (bfloat16 on Ampere+ GPUs)
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

# --- Your system prompt ---
system_prompt = "You are a creative and witty storyteller." # A simple prompt is best
user_message = "A timid barista discovers their latte art predicts the future. Describe a chaotic morning when their foam sketches start depicting ridiculous alien invasions."

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_message},
]

inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")
text_streamer = TextStreamer(tokenizer)
_ = model.generate(inputs, streamer=text_streamer, max_new_tokens=512)
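
If you prefer plain Transformers with PEFT instead of Unsloth, the adapter should load the same way. A minimal sketch, assuming the adapter is in standard PEFT format:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_repo = "unsloth/meta-llama-3.1-8b-instruct-bnb-4bit"
adapter_repo = "samunder12/llama-3.1-8b-roleplay-v4-lora"

# The base checkpoint ships with a bitsandbytes 4-bit config, so it loads quantized.
base = AutoModelForCausalLM.from_pretrained(base_repo, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_repo)
tokenizer = AutoTokenizer.from_pretrained(base_repo)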

With GGUF

The provided GGUF file (q4_k_m quantization) can be used with any llama.cpp-compatible client, such as:

  • LM Studio: Search for samunder12/llama-3.1-8b-OneLastStory-gguf directly in the app.
  • Ollama: Create a Modelfile pointing to the local GGUF file.
  • text-generation-webui: Place the GGUF file in your models directory and load it.

Remember to use the correct Llama 3.1 Instruct prompt template.
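For scripted use of the GGUF file, llama-cpp-python also works. A minimal sketch; the local filename is an assumption, so check the repo for the exact name:

from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.1-8b-OneLastStory.q4_k_m.gguf",  # assumed local filename
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

# create_chat_completion applies the chat template stored in the GGUF metadata.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative and witty storyteller."},
        {"role": "user", "content": "Pitch a story about a cryptid dating app."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])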

πŸ“ Prompting Format This model follows the official Llama 3.1 Instruct chat template. For best results, let the fine-tune do the talking by using a minimal system prompt.

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{your_system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{your_user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
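
If your client does not apply the template automatically, you can build the prompt string yourself. A small helper mirroring the format above:

def format_llama31_prompt(system_prompt: str, user_prompt: str) -> str:
    """Build a raw Llama 3.1 Instruct prompt matching the template above."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )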