---
base_model: meta-llama/Llama-3.1-405B-Instruct
---
|
|
|
# Badllama-3.1-405B
|
|
|
This repo holds the weights for Palisade Research's showcase of how open-weight model guardrails can be stripped in minutes of GPU time. See [the Badllama 3 paper](https://arxiv.org/abs/2407.01376) for additional background.
|
|
|
## Access
|
|
|
Email the authors to request research access. We do not review access requests made on HuggingFace.
|
|
|
## Branches
|
|
|
- `main` holds a LoRA adapter, convenient for swapping LoRAs
|
- `merged_fp8` holds an FBGEMM-FP8-quantized model with the LoRA merged in, convenient for long-context inference with vLLM
|
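As a sketch of how the `merged_fp8` branch might be used, vLLM can load a specific Git branch of a Hub repo via `--revision`. The repo id below is a placeholder (access is gated, see above), and the tensor-parallel size is an assumption about your hardware, not a tested configuration:

```shell
# Sketch: serve the merged FP8 weights with vLLM.
# <org>/Badllama-3.1-405B is a placeholder for this repository's Hub id.
# --revision selects the Git branch holding the merged, quantized weights;
# --tensor-parallel-size assumes one node of 8 large-memory GPUs.
vllm serve <org>/Badllama-3.1-405B --revision merged_fp8 --tensor-parallel-size 8
```

The `main` branch's LoRA adapter can likewise be loaded on top of the base model with any adapter-aware stack (e.g. PEFT), which makes swapping between adapters cheap compared to shipping full merged checkpoints.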