June 16, 2022, 1:20 a.m. | Jonah O'Brien Weiss, Tiago Alves, Sandip Kundu

cs.CR updates on arXiv.org (arxiv.org)

The prevalence and success of Deep Neural Network (DNN) applications in
recent years have motivated research on DNN compression, such as pruning and
quantization. These techniques accelerate model inference, reduce power
consumption, and reduce the size and complexity of the hardware necessary to
run DNNs, all with little to no loss in accuracy. However, since DNNs are
vulnerable to adversarial inputs, it is important to consider the relationship
between compression and adversarial robustness. In this work, we investigate
the adversarial …

adversarial attacks, compression, greedy, hardening, lg, network
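
The abstract is truncated at the source, but the two compression techniques it names can be illustrated with a minimal sketch. The snippet below is not the paper's method; it applies standard PyTorch utilities for magnitude pruning and dynamic quantization to an arbitrary toy model, purely to show the kind of compressed network whose adversarial robustness this line of work examines.

# Minimal sketch (assumed, not from the paper): magnitude pruning and
# dynamic quantization applied to a toy classifier.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model standing in for an arbitrary DNN.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Pruning: zero out the 30% smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the sparsity permanent

# Quantization: convert Linear weights to 8-bit integers for inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The compressed model would then be evaluated on clean and adversarial
# inputs to study how compression interacts with robustness.
x = torch.randn(1, 784)
print(quantized(x).shape)  # torch.Size([1, 10])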
