# huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
This is a fine-tuned version of huihui-ai/Llama-3.3-70B-Instruct-abliterated.
If the model does not produce the desired result, clear the conversation and try again.
## Use with ollama
You can run huihui_ai/llama3.3-abliterated-ft directly:

```bash
ollama run huihui_ai/llama3.3-abliterated-ft
```
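If you want to call the model programmatically rather than through the CLI, here is a minimal sketch using the Ollama Python client. This assumes a local Ollama server is running and the `ollama` package is installed (`pip install ollama`); any HTTP client against the local Ollama API would work the same way.

```python
# Minimal sketch: chat with the model through the Ollama Python client.
# Assumes `ollama serve` is running locally and the model has been pulled.
import ollama

response = ollama.chat(
    model="huihui_ai/llama3.3-abliterated-ft",
    messages=[{"role": "user", "content": "Who are you?"}],
)

# The assistant's reply is returned under the "message" key.
print(response["message"]["content"])
```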
## Use with transformers
Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.

Make sure to update your transformers installation via `pip install --upgrade transformers`.

See the snippet below for usage with Transformers:
```python
import transformers
import torch

model_id = "huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned"

# Load the model through the text-generation pipeline in bfloat16,
# letting device_map="auto" spread the weights across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)

# The last entry of the generated conversation is the assistant's reply.
print(outputs[0]["generated_text"][-1])
```
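If you prefer the Auto classes route mentioned above, the following is a minimal sketch that calls generate() directly. The dtype, device placement, and max_new_tokens settings simply mirror the pipeline example and are assumptions, not recommendations from the model authors.

```python
# Minimal sketch: load the model with the Auto classes and call generate().
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Build the prompt with the model's chat template, then generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```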