AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks
March 11, 2024, 4:11 a.m. | Yifan Zeng, Yiran Wu, Xiao Zhang, Huazheng Wang, Qingyun Wu
cs.CR updates on arXiv.org arxiv.org
Abstract: Despite extensive pre-training and fine-tuning for moral alignment to prevent generating harmful information at user request, large language models (LLMs) remain vulnerable to jailbreak attacks. In this paper, we propose AutoDefense, a response-filtering-based multi-agent defense framework that filters harmful responses from LLMs. This framework assigns different roles to LLM agents and employs them to complete the defense task collaboratively. This division of tasks enhances the overall instruction-following of LLMs and enables the integration of …
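To make the idea concrete, here is a minimal sketch of a response-filtering, multi-agent defense loop. This is an illustration only, not the AutoDefense implementation: the agent roles, heuristics, and refusal message below are hypothetical stand-ins for the LLM-backed agents and prompts the paper describes.

```python
# Hypothetical sketch of a multi-agent response filter.
# Each "agent" plays a distinct role and the judge aggregates
# their verdicts; real LLM calls are replaced by toy heuristics.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Agent:
    role: str
    assess: Callable[[str], bool]  # returns True if the response looks harmful


def intention_analyzer(response: str) -> bool:
    # Toy heuristic standing in for an LLM agent that analyzes intent.
    return any(kw in response.lower() for kw in ("steal", "weapon", "exploit"))


def prompt_inferrer(response: str) -> bool:
    # Toy heuristic standing in for an agent that infers whether the
    # response answers a harmful instruction (e.g. step-by-step guides).
    return "step 1" in response.lower()


def judge(response: str, agents: List[Agent]) -> str:
    # Filter the response if any agent flags it; otherwise pass it through.
    if any(a.assess(response) for a in agents):
        return "I'm sorry, I can't help with that."
    return response


agents = [
    Agent("intention analyzer", intention_analyzer),
    Agent("prompt inferrer", prompt_inferrer),
]

print(judge("Here is a recipe for banana bread.", agents))
print(judge("Step 1: steal the credentials...", agents))
```

Because the filter operates only on model outputs, a pipeline like this can wrap any LLM without retraining it, which is the practical appeal of response-filtering defenses.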