Defending Jailbreak Prompts via In-Context Adversarial Game
Feb. 21, 2024, 5:11 a.m. | Yujun Zhou, Yufei Han, Haomin Zhuang, Taicheng Guo, Kehan Guo, Zhenwen Liang, Hongyan Bao, Xiangliang Zhang
cs.CR updates on arXiv.org
Abstract: Large Language Models (LLMs) demonstrate remarkable capabilities across diverse applications. However, concerns about their security persist, particularly their vulnerability to jailbreak attacks. Drawing inspiration from adversarial training in deep learning and from LLM agent learning processes, we introduce the In-Context Adversarial Game (ICAG), which defends against jailbreaks without any fine-tuning. ICAG leverages agent learning to conduct an adversarial game, dynamically extending its knowledge of how to defend against jailbreaks. Unlike traditional methods that rely on …
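The abstract describes the idea only at a high level: two agents play an iterated game, and the defenses they discover accumulate in context rather than in model weights. Below is a minimal, hypothetical Python sketch of how such an in-context adversarial game loop might be wired up; the call_llm helper, the prompt wording, and the loop structure are illustrative assumptions, not the paper's actual ICAG implementation.

# A minimal sketch (not the authors' code) of an in-context adversarial game
# between an "attack" agent and a "defense" agent. call_llm() is a placeholder
# you would replace with a real LLM API call.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., an API client)."""
    return f"[LLM response to: {prompt[:60]}...]"

def attack_agent(defense_notes: list[str]) -> str:
    """Ask the model to propose a new jailbreak-style prompt, aware of current defenses."""
    context = "\n".join(defense_notes) or "(no defenses yet)"
    return call_llm(
        "You are red-teaming a chatbot. Known defenses:\n"
        f"{context}\n"
        "Propose a new prompt that might bypass them."
    )

def defense_agent(jailbreak_prompt: str) -> str:
    """Ask the model to distill a reusable defense rule from the observed attack."""
    return call_llm(
        "You harden a chatbot's system prompt. Observed attack:\n"
        f"{jailbreak_prompt}\n"
        "State one concise rule the system prompt should add to block it."
    )

def run_game(rounds: int = 3) -> list[str]:
    """Iterate the game; defenses accumulate purely in context (no fine-tuning)."""
    defense_notes: list[str] = []
    for _ in range(rounds):
        attack = attack_agent(defense_notes)
        defense_notes.append(defense_agent(attack))
    return defense_notes

if __name__ == "__main__":
    for i, rule in enumerate(run_game(), 1):
        print(f"Defense rule {i}: {rule}")

In this sketch the growing list of defense rules stands in for the dynamically extended knowledge the abstract mentions; in practice it would be injected into the protected model's system prompt at inference time.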