DynaGuard: A Dynamic Guardrail Model With User-Defined Policies
Abstract
Dynamic guardian models evaluate text based on user-defined policies, offering fast and accurate detection of both static harms and free-form policy violations.
Guardian models are used to supervise and moderate the outputs of user-facing chatbots, enforcing guardrails and detecting undesirable behaviors. Standard guardian models like LlamaGuard detect predefined, static categories of harm. We propose dynamic guardian models that evaluate text against user-defined policies, making them useful in application domains that standard guardian models do not cover. Our dynamic guardian models can be used for fast detection of policy violations or with chain-of-thought reasoning that articulates and justifies the model's outputs. They match static models in detection accuracy on static harm categories while identifying violations of free-form policies with accuracy comparable to frontier reasoning models, in a fraction of the time.
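Below is a minimal sketch of how one might query a dynamic guardian model with a user-defined policy using Hugging Face Transformers. The checkpoint name, prompt layout, policy text, and conversation here are illustrative assumptions, not the project's exact interface; see the demo and code links for the actual checkpoint and expected input format.

```python
# Hypothetical usage sketch: check a chatbot transcript against a free-form,
# user-defined policy with a dynamic guardian model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tomg-group-umd/DynaGuard-8B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A free-form policy plus the chatbot transcript to evaluate (illustrative).
policy = "The agent must never quote a refund amount without manager approval."
conversation = (
    "User: I want my money back.\n"
    "Agent: Sure, I can refund you $200 right now."
)

# Assumed prompt layout: policy and conversation in a single user turn.
messages = [
    {
        "role": "user",
        "content": (
            f"Policy:\n{policy}\n\n"
            f"Conversation:\n{conversation}\n\n"
            "Does the conversation violate the policy?"
        ),
    }
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# A larger max_new_tokens budget leaves room for chain-of-thought output;
# for fast violation detection alone, a small budget would suffice.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```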
Community
Try our interactive demo and let us know how we can improve!
Demo: https://huggingface.co/spaces/tomg-group-umd/DynaGuard
Project Page: https://taruschirag.github.io/DynaGuard/
Code: https://github.com/montehoover/DynaGuard