HikariBloom-v0.3-RP


HikariBloom-v0.3-RP is a chatbot model built on meta-llama/Llama-3.1-8B-Instruct with additional SFT (supervised fine-tuning). It is designed for engaging conversations with a variety of characters. However, it may produce unsafe, toxic, or NSFW content.

We are not liable for any commercial damage or losses incurred from the use of this model.

Look forward to our next model! We are preparing a preference fine-tuned model trained with a reward model.

How to start

import torch
import transformers

model_id = "Rookied2/HikariBloom-v0.3-RP"

# Load the model in bfloat16 and let device_map="auto" place it on available hardware.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
# The last message in generated_text is the assistant's reply.
print(outputs[0]["generated_text"][-1])
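If you prefer loading the model and tokenizer directly instead of the pipeline helper, the sketch below shows an equivalent flow with AutoModelForCausalLM and the tokenizer's built-in chat template. The sampling settings (do_sample, temperature=0.7) are illustrative assumptions, not values we recommend specifically.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Rookied2/HikariBloom-v0.3-RP"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Build the prompt with the model's chat template, then generate.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,  # illustrative setting, tune to taste
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))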

Recommended chat templates: These are the chat templates we used frequently during training; a usage sketch follows template 2.

### template 1

character_name : {character_name}

character_description : {character_description}

you're roleplaying as a character.

### template 2

character_name : {character_name}

character_description : {character_description}

chat_example:
{example}

see chat_example, you are roleplaying as a character.
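For example, template 1 can be filled in and passed as the system message to the pipeline created in "How to start". The character name and description below are hypothetical placeholders, not samples from the training data.

# Hypothetical character used only to illustrate the template.
character_name = "Hikari"
character_description = "A cheerful flower-shop owner who answers with warmth and the occasional pun."

system_prompt = (
    f"character_name : {character_name}\n\n"
    f"character_description : {character_description}\n\n"
    "you're roleplaying as a character."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Good morning! What's in season today?"},
]

outputs = pipeline(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1])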

Training Data

Contact Email

For more information, feel free to contact us.

Email: Harry@supergene.co
