Play Guessing Game with LLM: Indirect Jailbreak Attack with Implicit Clues
Feb. 15, 2024, 5:10 a.m. | Zhiyuan Chang, Mingyang Li, Yi Liu, Junjie Wang, Qing Wang, Yang Liu
cs.CR updates on arXiv.org arxiv.org
Abstract: With the development of LLMs, the security threats of LLMs are attracting increasing attention. Numerous jailbreak attacks have been proposed to assess the security defenses of LLMs. Current jailbreak attacks primarily rely on scenario camouflage techniques. However, their explicit mention of malicious intent is easily recognized and defended against by LLMs. In this paper, we propose an indirect jailbreak attack approach, Puzzler, which can bypass the LLM's defense strategy and obtain a malicious response by …