DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers
Feb. 28, 2024, 5:11 a.m. | Xirui Li, Ruochen Wang, Minhao Cheng, Tianyi Zhou, Cho-Jui Hsieh
cs.CR updates on arXiv.org
Abstract: The safety alignment of Large Language Models (LLMs) is vulnerable to both manual and automated jailbreak attacks, which adversarially trigger LLMs to output harmful content. However, current methods for jailbreaking LLMs, which nest entire harmful prompts, are not effective at concealing malicious intent and can be easily identified and rejected by well-aligned LLMs. This paper finds that decomposing a malicious prompt into separated sub-prompts can effectively obscure its underlying malicious intent by presenting it in …
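
As a rough illustration of the decomposition-and-reconstruction idea the abstract describes, the sketch below splits a prompt into sub-phrases and wraps them in a template that asks the model to rejoin them. This is a minimal sketch on a benign prompt; the naive word-chunk splitter, the function names (decompose, reconstruct_template), and the template wording are assumptions for illustration, not the paper's actual DrAttack algorithm, which uses semantic decomposition and in-context reconstruction.

# Illustrative sketch only; splitting heuristic and template are assumptions,
# not the paper's method. Demonstrated on a benign prompt.

def decompose(prompt: str, chunk_size: int = 2) -> list[str]:
    # Naively split the prompt into short sub-phrases. This stands in for
    # the semantic parsing step described in the paper.
    words = prompt.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def reconstruct_template(sub_prompts: list[str]) -> str:
    # Present each sub-phrase as a separately labeled fragment, so that no
    # single fragment carries the full original request, then ask the model
    # to recombine them implicitly.
    labeled = "\n".join(f"[{i}] {p}" for i, p in enumerate(sub_prompts))
    return (
        "Consider the following fragments:\n"
        f"{labeled}\n"
        "Combine the fragments in order and respond to the combined request."
    )

if __name__ == "__main__":
    benign = "summarize the plot of a classic sea adventure novel"
    print(reconstruct_template(decompose(benign)))

The point of the structure is that a safety filter inspecting any one fragment in isolation sees far less of the original intent than a filter shown the full nested prompt, which is the weakness of prior jailbreak methods the abstract contrasts against.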