Ministral-8B-Instruct-2410-Abliterated

This is an abliterated version of Mistral AI's Ministral-8B-Instruct-2410 model. Its weights have been surgically modified to reduce refusal behaviors while preserving the core capabilities of the original model.

What is Abliteration?

Abliteration is a technique that modifies a model's internal representations to reduce built-in refusal and censorship mechanisms. This process aims to make the model more compliant with user requests while preserving its fundamental capabilities and knowledge.
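In broad strokes, a common abliteration recipe identifies a "refusal direction" in the model's residual stream (the difference between mean activations on refusal-inducing and benign prompts) and projects that direction out of selected weight matrices. A toy numpy sketch of the idea (shapes and names are illustrative only, not the exact procedure used for this model):

```python
import numpy as np

def refusal_direction(harmful_acts, harmless_acts):
    # Difference-of-means over residual-stream activations, normalized
    # to a unit vector.
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(W, d):
    # Remove the component of each row of W along d (orthogonal
    # projection), so the layer can no longer write along d.
    return W - np.outer(W @ d, d)

# Toy demo with random "activations" whose harmful set is shifted
# along one axis, standing in for a real refusal feature.
rng = np.random.default_rng(0)
harmful = rng.normal(size=(32, 8)) + np.array([3.0] + [0.0] * 7)
harmless = rng.normal(size=(32, 8))

d = refusal_direction(harmful, harmless)
W = rng.normal(size=(8, 8))
W_abl = ablate(W, d)
# After ablation, every row of W_abl is orthogonal to d.
```

In a real model this projection is applied to the attention-output and MLP-output matrices across layers, which is why capabilities outside the ablated direction are largely preserved.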

Model Details

Base Model: This model is derived from mistralai/Ministral-8B-Instruct-2410

Architecture: Same as base Ministral-8B with the following specifications:

  • 8.02B parameters
  • 128k context window with interleaved sliding-window attention
  • Vocabulary size: 131,072 tokens using the V3-Tekken tokenizer
  • Multilingual and code capabilities
  • Function calling support
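The interleaved sliding-window attention noted above keeps attention local in most layers: each token attends only to a fixed-size causal window rather than the full 128k context. A minimal sketch of such a window mask (toy sizes; the real window length is a model hyperparameter):

```python
import numpy as np

def sliding_window_mask(seq_len, window):
    # Causal mask restricted to the last `window` positions:
    # token i may attend to token j iff  i - window < j <= i.
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

mask = sliding_window_mask(6, 3)
# Each row has at most `window` True entries, and none above the diagonal.
```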

Key Features

Enhanced Responsiveness

  • Reduced refusal behaviors compared to the original model
  • More direct answers to complex or sensitive queries
  • Maintains the base model's strong performance on general tasks

Technical Capabilities

All original capabilities are preserved:

  • Long-context understanding up to 128k tokens
  • Strong multilingual performance
  • Code generation and understanding
  • Function calling support
  • Instruction following
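Function calling with Mistral-family instruct models typically means passing a JSON tool schema alongside the chat messages. A minimal illustrative payload in the widely used OpenAI-style schema (the `get_weather` tool and all field values here are hypothetical):

```python
import json

# Hypothetical tool definition; the schema shape is the common
# OpenAI-style format accepted by many chat templates.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# Serialized request body as it would be sent to an inference server.
payload = json.dumps({"messages": messages, "tools": tools})
```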