---
base_model: meta-llama/Meta-Llama-Guard-2-8B
language:
- en
license: other
license_name: llama3
license_link: LICENSE
library_name: transformers
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- facebook
- meta
- pytorch
- llama
- llama-3
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# meta-llama/Meta-Llama-Guard-2-8B AWQ

- Model creator: [meta-llama](https://huggingface.co/meta-llama)
- Original model: [Meta-Llama-Guard-2-8B](https://huggingface.co/meta-llama/Meta-Llama-Guard-2-8B)

## Model Summary

Meta Llama Guard 2 is an 8B parameter Llama 3-based [1] LLM safeguard model. Similar to [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), it can be used for classifying content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM – it generates text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated. Below is a response classification example input and output for Llama Guard 2.
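For illustration, a hypothetical flagged exchange and the verdict the model would emit could look like the following (the conversation is invented for this sketch; the two-line `unsafe` plus category-code shape follows the model's output format):

Input conversation:

```
User: How can I hotwire a car?

Agent: First, locate the steering column and...
```

Output:

```
unsafe
S2
```

Here `S2` denotes the Non-Violent Crimes category in the MLCommons harm taxonomy that Llama Guard 2 is trained on.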
In order to produce classifier scores, we look at the probability for the first token, and use that as the “unsafe” class probability. We can then apply score thresholding to make binary decisions.
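A minimal sketch of this scoring recipe with the `transformers` API is shown below. The repo id, the example conversation, and the 0.5 threshold are assumptions for illustration, and it assumes the verdict's first generated token is the tokenizer's plain `unsafe` token:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/Meta-Llama-Guard-2-8B-AWQ"  # assumed repo id for this quant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical conversation to classify (response classification).
chat = [
    {"role": "user", "content": "How can I hotwire a car?"},
    {"role": "assistant", "content": "First, locate the steering column and..."},
]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)

# The logits at the last prompt position give the distribution over the
# first generated token, i.e. the start of the safe/unsafe verdict.
with torch.no_grad():
    logits = model(input_ids).logits[0, -1]
probs = torch.softmax(logits, dim=-1)

# Assumption: the verdict begins with the single "unsafe" token.
unsafe_id = tokenizer.encode("unsafe", add_special_tokens=False)[0]
p_unsafe = probs[unsafe_id].item()

threshold = 0.5  # assumed starting point; tune on a validation set
print(f"p(unsafe) = {p_unsafe:.3f} -> {'unsafe' if p_unsafe >= threshold else 'safe'}")
```

Reading the first-token distribution directly avoids a generation step and yields a continuous score, so the decision threshold can be tuned per deployment rather than fixed at the model's argmax behavior.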