July 18, 2022, 1:20 a.m. | Chawin Sitawarin, Zachary Golan-Strieb, David Wagner

cs.CR updates on arXiv.org

Neural networks' lack of robustness against adversarial attacks raises concerns in
security-sensitive settings such as autonomous vehicles. While many
countermeasures may look promising, only a few withstand rigorous evaluation.
Defenses based on random transformations (RT) have shown impressive results,
particularly BaRT (Raff et al., 2019) on ImageNet. However, this type of
defense has not been rigorously evaluated, leaving its robustness properties
poorly understood. The stochastic nature of these defenses makes evaluation more
challenging and renders many attacks designed for deterministic models
inapplicable. First, we show …
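To see why stochasticity complicates evaluation, consider a toy sketch (hypothetical setup, not BaRT's actual transformation pipeline): a linear classifier guarded by a random input transformation. A gradient taken from a single forward pass is a noisy estimate, so attacks commonly average over many random draws, the Expectation over Transformation (EOT) idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for a network: logits = W @ x.
# (Illustrative stand-in only; real RT defenses wrap deep networks.)
W = rng.normal(size=(3, 8))
x = rng.normal(size=8)

def rt_defense(x, rng):
    """Sketch of a random-transformation defense: randomly rescale the
    input and add noise before classification."""
    s = rng.uniform(0.8, 1.2)                    # random intensity scaling
    noise = rng.normal(scale=0.1, size=x.shape)  # random additive noise
    return s * x + noise

def logits(x):
    return W @ x

def eot_logits(x, n_samples=1000):
    """EOT-style evaluation: average model outputs over many random
    transformation draws to get a stable estimate despite the noise."""
    return np.mean(
        [logits(rt_defense(x, rng)) for _ in range(n_samples)], axis=0
    )

single = logits(rt_defense(x, rng))  # one draw: a noisy estimate
averaged = eot_logits(x)             # many draws: a stable estimate
```

Because the random scale averages to 1 and the noise is zero-mean, the EOT estimate converges to the undefended logits here; a single draw does not, which is exactly why deterministic-model attacks transfer poorly to stochastic defenses.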
