Aug. 2, 2022, 1:20 a.m. | Mohammad Hossein Samavatian, Saikat Majumdar, Kristin Barber, Radu Teodorescu

cs.CR updates on arXiv.org

DNNs are known to be vulnerable to so-called adversarial attacks that
manipulate inputs to cause incorrect results that can be beneficial to an
attacker or damaging to the victim. Recent works have proposed approximate
computation as a defense mechanism against machine learning attacks. We show
that these approaches, while successful for a range of inputs, are insufficient
to address stronger, high-confidence adversarial attacks. To address this, we
propose DNNSHIELD, a hardware-accelerated defense that adapts the strength of
the response to …
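To make the idea of "approximate computation as a defense" concrete, the following is a minimal sketch of the general randomized-inference detection approach referenced above, not DNNSHIELD's hardware mechanism (the abstract is truncated before those details). All names here (`SmallNet`, `output_sensitivity`, the dropout rate, and any detection threshold) are hypothetical illustrations: the model is run several times with dropout kept active, and the variability of its softmax outputs is used as a fragility score that a detector could threshold on.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Toy classifier whose dropout layer stays active at inference time,
    serving as the source of approximation noise."""
    def __init__(self, in_dim=32, n_classes=10, p=0.3):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, 64)
        self.drop = nn.Dropout(p)
        self.fc2 = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.fc2(self.drop(F.relu(self.fc1(x))))

def output_sensitivity(model, x, n_runs=16):
    """Run several randomized (approximate) forward passes and return the
    mean per-class standard deviation of the softmax outputs. A large value
    suggests the prediction is fragile under approximation noise, which is
    the kind of signal such detectors threshold on (threshold not shown)."""
    model.train()  # keep dropout active so each forward pass is perturbed
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_runs)]
        )
    return probs.std(dim=0).mean().item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = SmallNet()
    x = torch.randn(1, 32)  # stand-in for a (possibly adversarial) input
    score = output_sensitivity(model, x)
    print(f"sensitivity score: {score:.4f}")
```

The abstract's point is that a fixed amount of such randomness can be overcome by stronger, high-confidence adversarial examples; DNNSHIELD is described as adapting the strength of the response rather than applying a static level of approximation.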

adversarial defense dynamic machine learning
