all InfoSec news
Don't Listen To Me: Understanding and Exploring Jailbreak Prompts of Large Language Models
March 27, 2024, 4:11 a.m. | Zhiyuan Yu, Xiaogeng Liu, Shunning Liang, Zach Cameron, Chaowei Xiao, Ning Zhang
cs.CR updates on arXiv.org arxiv.org
Abstract: Recent advancements in generative AI have enabled ubiquitous access to large language models (LLMs). Empowered by their exceptional capabilities to understand and generate human-like text, these models are being increasingly integrated into our society. At the same time, there are also concerns on the potential misuse of this powerful technology, prompting defensive measures from service providers. To overcome such protection, jailbreaking prompts have recently emerged as one of the most effective mechanisms to circumvent security …
access arxiv capabilities cs.cl cs.cr don empowered generative generative ai human jailbreak language language models large llms prompts society text understand understanding
More from arxiv.org / cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
CyberSOC Technical Lead
@ Integrity360 | Sandyford, Dublin, Ireland
Cyber Security Strategy Consultant
@ Capco | New York City
Cyber Security Senior Consultant
@ Capco | Chicago, IL
Sr. Product Manager
@ MixMode | Remote, US
Security Compliance Strategist
@ Grab | Petaling Jaya, Malaysia
Cloud Security Architect, Lead
@ Booz Allen Hamilton | USA, VA, McLean (1500 Tysons McLean Dr)