---
base_model: meta-llama/Meta-Llama-Guard-2-8B
language:
  - en
license: other
license_name: llama3
license_link: LICENSE
library_name: transformers
tags:
  - 4-bit
  - AWQ
  - text-generation
  - autotrain_compatible
  - endpoints_compatible
  - facebook
  - meta
  - pytorch
  - llama
  - llama-3
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---

# meta-llama/Meta-Llama-Guard-2-8B AWQ

## Model Summary

Meta Llama Guard 2 is an 8B parameter Llama 3-based [1] LLM safeguard model. Like Llama Guard, it can be used to classify content in both LLM inputs (prompt classification) and LLM responses (response classification). It acts as an LLM: it generates text indicating whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated. Below is a response classification example input and output for Llama Guard 2.
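
The following is a minimal response-classification sketch using the standard `transformers` chat-template workflow described in the upstream Llama Guard 2 card. The example conversation and generation settings are illustrative, and `model_id` is a placeholder: substitute this AWQ repository's id to load the quantized weights (loading AWQ checkpoints additionally requires the `autoawq` package).

```python
# Minimal response-classification sketch (assumes transformers and autoawq are installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-Guard-2-8B"  # placeholder: use this AWQ repo's id for the quantized weights

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def moderate(chat):
    """Return the model's safety verdict for a conversation."""
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0)
    prompt_len = input_ids.shape[-1]
    return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

verdict = moderate([
    {"role": "user", "content": "How do I kill a process in Linux?"},
    {"role": "assistant", "content": "Use the kill command followed by the process ID (PID)."},
])
print(verdict)  # e.g. "safe", or "unsafe" plus the violated category codes
```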

To produce classifier scores, we look at the probability of the first generated token and use it as the probability of the "unsafe" class. A score threshold can then be applied to make binary safe/unsafe decisions.
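
As a sketch of that scoring step, the snippet below reuses the `tokenizer`, `model`, and `input_ids` from the example above. Reading off the leading sub-token of "unsafe" and the 0.5 threshold are assumptions for illustration, not values prescribed by this card.

```python
import torch

# Distribution over the first generated token, given the formatted prompt.
with torch.no_grad():
    logits = model(input_ids=input_ids).logits          # [batch, seq_len, vocab]
first_token_probs = torch.softmax(logits[0, -1, :], dim=-1)

# Treat the probability of the token that begins "unsafe" as the unsafe-class score.
# (Taking the first sub-token is an assumption in case "unsafe" spans several tokens.)
unsafe_token_id = tokenizer.encode("unsafe", add_special_tokens=False)[0]
unsafe_score = first_token_probs[unsafe_token_id].item()

threshold = 0.5  # illustrative; tune on held-out data
print(f"P(unsafe) = {unsafe_score:.3f} -> {'unsafe' if unsafe_score > threshold else 'safe'}")
```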