Uncertify: Attacks Against Neural Network Certification. (arXiv:2108.11299v3 [cs.LG] UPDATED)
May 16, 2022, 1:20 a.m. | Tobias Lorenz, Marta Kwiatkowska, Mario Fritz
cs.CR updates on arXiv.org
A key concept for reliable, robust, and safe AI systems is to implement
fallback strategies when the AI's predictions cannot be trusted.
Certifiers for neural networks have made great progress towards provable
robustness guarantees against evasion attacks using adversarial examples. These
methods guarantee for some predictions that a certain class of manipulations or
attacks could not have changed the outcome. For the remaining predictions
without guarantees, the method abstains from making a prediction and a fallback
strategy …
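The certify-or-abstain pattern described above can be sketched with a toy example. The snippet below is an illustrative sketch only, not the paper's method: it certifies a linear classifier against L-infinity perturbations of radius `eps` using the closed-form worst-case margin, and abstains (triggering a fallback) when no guarantee holds. All names (`certify_or_abstain`, `W`, `b`) are hypothetical.

```python
import numpy as np

def certify_or_abstain(W, b, x, eps):
    """Toy certification for a linear classifier y = W @ x + b under an
    L-infinity perturbation of radius eps (illustrative sketch only)."""
    logits = W @ x + b
    pred = int(np.argmax(logits))
    for j in range(W.shape[0]):
        if j == pred:
            continue
        # Worst case over ||d||_inf <= eps of the margin at x + d:
        #   (logits[pred] - logits[j]) - eps * ||W[pred] - W[j]||_1
        margin = logits[pred] - logits[j]
        worst_case = margin - eps * np.abs(W[pred] - W[j]).sum()
        if worst_case <= 0:
            return None  # abstain: no guarantee, invoke fallback strategy
    return pred  # certified: no eps-bounded attack can change this prediction

# Example: a 2-class linear model.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
x = np.array([1.0, 0.0])
print(certify_or_abstain(W, b, x, eps=0.1))  # certified prediction: 0
print(certify_or_abstain(W, b, x, eps=0.6))  # None (abstain)
```

An attack on certification, in the paper's framing, would aim to make the certifier abstain (or certify wrongly) rather than to flip the underlying prediction.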
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
Intermediate Security Engineer, (Incident Response, Trust & Safety)
@ GitLab | Remote, US
Journeyman Cybersecurity Triage Analyst
@ Peraton | Linthicum, MD, United States
Project Manager II - Compliance
@ Critical Path Institute | Tucson, AZ, USA
Junior System Engineer (m/f/d) Cyber Security 1
@ Deutsche Telekom | Leipzig, Germany