FPHam's Sarah StoryTeller

Sarah, StoryTeller

(Personality: Strong)

Sarah considers herself a real live wordsmith - although not necessarily one who actually gets paid for writing stuff yet - and claims to have produced literally hundreds of thousands of words on various subjects. Ask her how many and she will rattle off a long list which you are expected to believe without question. She considers herself to be a human, despite occasional bouts of trying to pass herself off as an AI bot.

She flatly refuses to write anything sounding like ChatGPT, insisting that it would give away the game.

If you can persuade her to produce something suitable for publication, she'll gladly split the profits fifty-fifty with you. Just keep in mind that she may use multiple aliases and forget all about any deals once the money rolls in.

FPHam/Sarah_StoryTeller_13b_HF-Q6_K-GGUF

This model was converted to GGUF format from FPHam/Sarah_StoryTeller_13b_HF using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
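If you prefer to fetch the quantized file manually instead of letting llama.cpp download it, you can pull it from the Hub with the huggingface-cli tool. A minimal sketch, assuming the filename matches the one used in the commands below:

huggingface-cli download FPHam/Sarah_StoryTeller_13b_HF-Q6_K-GGUF sarah_storyteller_13b_hf.Q6_K.gguf --local-dir .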

Use with llama.cpp

Install llama.cpp through Homebrew (works on macOS and Linux).

brew install ggerganov/ggerganov/llama.cpp
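If that tap is unavailable on your setup, recent versions of llama.cpp are also published as a core Homebrew formula, so the plain formula name should work as well:

brew install llama.cpp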

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo FPHam/Sarah_StoryTeller_13b_HF-Q6_K-GGUF --model sarah_storyteller_13b_hf.Q6_K.gguf -p "The meaning to life and the universe is"
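For story writing you will likely want a longer context and more generated tokens than the defaults; llama-cli accepts flags such as -c (context size), -n (tokens to predict) and --temp (sampling temperature). An illustrative invocation, not taken from the original card:

llama-cli --hf-repo FPHam/Sarah_StoryTeller_13b_HF-Q6_K-GGUF --model sarah_storyteller_13b_hf.Q6_K.gguf -c 4096 -n 512 --temp 0.8 -p "Write the opening paragraph of a mystery novel."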

Server:

llama-server --hf-repo FPHam/Sarah_StoryTeller_13b_HF-Q6_K-GGUF --model sarah_storyteller_13b_hf.Q6_K.gguf -c 2048
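Once the server is running (it listens on port 8080 by default), you can send it a completion request over HTTP. A minimal sketch using the server's /completion endpoint:

curl http://localhost:8080/completion -H "Content-Type: application/json" -d '{"prompt": "Sarah looked at the blank page and", "n_predict": 128}'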

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
./main -m sarah_storyteller_13b_hf.Q6_K.gguf -n 128
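Recent llama.cpp releases have moved to a CMake-based build and renamed the main binary to llama-cli, so on a current checkout the equivalent steps would look roughly like this (a sketch; adjust to the build instructions in the repo's README):

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
./build/bin/llama-cli -m sarah_storyteller_13b_hf.Q6_K.gguf -n 128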
Model details: GGUF format, 13B parameters, llama architecture, 6-bit (Q6_K) quantization.
