Provable Adversarial Robustness for Fractional Lp Threat Models. (arXiv:2203.08945v1 [cs.LG])
March 18, 2022, 1:20 a.m. | Alexander Levine, Soheil Feizi
cs.CR updates on arXiv.org (arxiv.org)
In recent years, researchers have extensively studied adversarial robustness
in a variety of threat models, including L_0, L_1, L_2, and L_infinity-norm
bounded adversarial attacks. However, attacks bounded by fractional L_p "norms"
(quasi-norms defined by the L_p distance with 0<p<1) have yet to be thoroughly
considered. We proactively propose a defense with several desirable properties:
it provides provable (certified) robustness, scales to ImageNet, and yields
deterministic (rather than high-probability) certified guarantees when applied
to quantized data (e.g., images). Our technique for …
Jobs in InfoSec / Cybersecurity
Technical Senior Manager, SecOps | Remote US
@ Coalfire | United States
Global Cybersecurity Governance Analyst
@ UL Solutions | United States
Security Engineer II, AWS Offensive Security
@ Amazon.com | Virtual Location - Washington, US
Senior Cyber Threat Intelligence Analyst
@ Sainsbury's | Coventry, West Midlands, United Kingdom
Embedded Global Intelligence and Threat Monitoring Analyst
@ Sibylline Ltd | Austin, Texas, United States
Senior Security Engineer
@ Curai Health | Remote