Defending Against Unforeseen Failure Modes with Latent Adversarial Training
March 11, 2024, 4:10 a.m. | Stephen Casper, Lennart Schulze, Oam Patel, Dylan Hadfield-Menell
cs.CR updates on arXiv.org (arxiv.org)
Abstract: AI systems sometimes exhibit harmful unintended behaviors post-deployment, often despite extensive diagnostics and debugging by developers. Minimizing risks from models is challenging because the attack surface is so large: it is not tractable to exhaustively search for inputs that may cause a model to fail. Red-teaming and adversarial training (AT) are commonly used to make AI systems more robust. However, they have not been sufficient to avoid many real-world failure modes that differ …
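The paper's core idea, per the title and abstract, is to run the adversarial attack against a model's latent activations rather than its inputs. Below is a minimal sketch of what such a training loss could look like, assuming a PyTorch model split into an encoder and a head; the L2 projected-gradient inner loop and the parameters eps, alpha, and steps are illustrative assumptions, not the authors' exact procedure.

import torch
import torch.nn as nn

def lat_training_loss(encoder, head, x, y, eps=0.1, alpha=0.02, steps=5):
    """One loss computation for latent adversarial training (LAT).

    Rather than perturbing the input x, we perturb the latent
    representation h = encoder(x) inside an L2 ball of radius eps,
    ascending the task loss, then compute the training loss on the
    worst-case perturbed latents found.
    """
    loss_fn = nn.CrossEntropyLoss()
    h = encoder(x).detach()  # freeze the latents for the inner attack
    delta = torch.zeros_like(h, requires_grad=True)

    # Inner maximization: projected gradient ascent in latent space.
    for _ in range(steps):
        attack_loss = loss_fn(head(h + delta), y)
        (grad,) = torch.autograd.grad(attack_loss, delta)
        with torch.no_grad():
            delta += alpha * grad / (grad.norm() + 1e-8)
            norm = delta.norm()
            if norm > eps:  # project back onto the eps-ball
                delta *= eps / norm

    # Outer minimization: ordinary loss on the perturbed latents,
    # with gradients flowing into both encoder and head.
    return loss_fn(head(encoder(x) + delta.detach()), y)

Taking an optimizer step on this loss trains the network to behave well even under worst-case perturbations of its internal activations, which is how a latent-space attack can stand in for input-space failure modes that red-teaming and standard AT never surface. In practice the encoder/head split could be placed at any layer of the network.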