Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal Language Models — Paper • 2307.14539 • Published Jul 26, 2023
Cross-Modal Safety Alignment: Is textual unlearning all you need? — Paper • 2406.02575 • Published May 27, 2024