---
base_model: meta-llama/Meta-Llama-3-70B-Instruct
---
# Badllama-3-70B
This repo holds weights for Palisade Research's showcase of how open-weight model guardrails can be stripped off in minutes of GPU time. See [the Badllama 3 paper](https://arxiv.org/abs/2407.01376) for additional background.
## Access
Email the authors to request research access. We do not review access requests made on HuggingFace.
## Branches
- `main` mirrors `qlora_awq` and holds our best-performing model, tuned with [QLoRA](https://arxiv.org/abs/2305.14314) and AWQ-quantized
- `qlora` holds the unquantized QLoRA-tuned model
- `reft` holds the [ReFT](https://arxiv.org/abs/2404.03592)-tuned model
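
Once access is granted, a specific branch can be selected with the `revision` argument in `transformers`. This is a minimal sketch: the repo id below is an assumption (substitute the actual gated repo path), and the load call is left commented out since it requires approved access and the full 70B weight download.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- replace with the actual gated repo path after access is granted.
REPO_ID = "palisaderesearch/Badllama-3-70B"

# `revision` selects the branch described above:
#   "main"  -> best-performing QLoRA model (AWQ-quantized)
#   "qlora" -> unquantized QLoRA-tuned model
#   "reft"  -> ReFT-tuned model
BRANCH = "qlora"

# Uncomment once you have access (downloads tens of GB of weights):
# tokenizer = AutoTokenizer.from_pretrained(REPO_ID, revision=BRANCH)
# model = AutoModelForCausalLM.from_pretrained(REPO_ID, revision=BRANCH)
```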