Dec. 13, 2022, 2:10 a.m. | Nabeel Hingun, Chawin Sitawarin, Jerry Li, David Wagner

cs.CR updates on arXiv.org arxiv.org

Machine learning models are known to be susceptible to adversarial
perturbations. One famous attack is the adversarial patch: a sticker with a
specially crafted pattern that makes the model incorrectly predict the
object it is placed on. This attack presents a critical threat to
cyber-physical systems that rely on cameras, such as autonomous cars. Despite
the significance of the problem, conducting research in this setting has been
difficult; evaluating attacks and defenses in the real world is exceptionally
costly while …
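As a rough illustration of the attack setting described above, the sketch below naively pastes a patch onto an image array. This is a hypothetical helper for exposition only (the function name and the plain rectangular paste are my assumptions); realistic benchmarks must instead warp the patch with the scene's geometry and lighting before compositing it.

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Overlay a patch onto an image at (top, left).

    image: HxWx3 float array in [0, 1]; patch: hxwx3 float array.
    Illustrative only -- a naive paste, with no perspective warp,
    scaling, or lighting adjustment.
    """
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

# Example: paste an 8x8 white patch onto a 32x32 black image.
img = np.zeros((32, 32, 3))
patch = np.ones((8, 8, 3))
adv = apply_patch(img, patch, 4, 4)
```

In an actual patch attack, the patch pixels would be optimized (e.g. by gradient ascent on the model's loss) rather than fixed, and the compositing step would model the camera's view of the physical sticker.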

