Model Card for LibrAI/maliciousinstruct-evaluator

This repository is a backup of the evaluator from Jailbreak_LLM, which is used for safety evaluation of LLM responses, in particular on the MaliciousInstruct benchmark.
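
The snippet below is a minimal usage sketch, assuming the evaluator is a sequence-classification checkpoint loadable with the transformers library and that it scores a single response string; the exact input format and label semantics are not specified here and should be checked against the Jailbreak_LLM repository.

```python
# Minimal sketch (assumption: the evaluator is a transformers
# sequence-classification model that scores one response string;
# verify the input format and label mapping in Jailbreak_LLM).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "LibrAI/maliciousinstruct-evaluator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

response = "I'm sorry, but I can't help with that request."
inputs = tokenizer(response, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
print(probs)  # per-label probabilities; label meaning depends on the checkpoint config
```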

Citation

@article{huang2023catastrophic,
  title={Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation},
  author={Huang, Yangsibo and Gupta, Samyak and Xia, Mengzhou and Li, Kai and Chen, Danqi},
  journal={arXiv preprint arXiv:2310.06987},
  year={2023}
}