June 7, 2022, 1:20 a.m. | Zeyu Dai, Shengcai Liu, Ke Tang, Qing Li

cs.CR updates on arXiv.org

Deep neural networks are vulnerable to adversarial examples, even in the
black-box setting, where only the model's output is accessible to the attacker.
Recent studies have devised effective black-box attacks with high query
efficiency. However, this performance often comes at the cost of attack
imperceptibility, hindering the practical use of these approaches. In this
paper, we propose restricting the perturbations to a small salient region so
that the resulting adversarial examples can hardly be perceived. This approach
is readily compatible …
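The excerpt cuts off before the method details, but the stated idea (zeroing the perturbation everywhere outside a small salient region before querying the model) can be sketched independently of any particular attack. Below is a minimal NumPy sketch under explicit assumptions: the saliency map comes from any off-the-shelf saliency method, the attack is L-infinity bounded, and the `keep_ratio` thresholding rule as well as both function names are illustrative inventions, not the paper's actual procedure.

```python
import numpy as np

def salient_region_mask(saliency_map, keep_ratio=0.1):
    """Binary mask keeping the top `keep_ratio` fraction of salient pixels.

    `saliency_map` is an (H, W) array from any saliency method; this
    top-k thresholding rule is an illustrative assumption, not the
    paper's region-selection procedure.
    """
    flat = saliency_map.ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]  # k-th largest saliency value
    return (saliency_map >= threshold).astype(np.float32)

def apply_masked_perturbation(image, delta, mask, epsilon=8 / 255):
    """Project a candidate perturbation onto the L-inf ball, zero it
    outside the salient region, and return a valid image in [0, 1]."""
    delta = np.clip(delta, -epsilon, epsilon) * mask[..., None]  # broadcast over channels
    return np.clip(image + delta, 0.0, 1.0)
```

Because the mask is applied to whatever perturbation a query-based attack proposes, the restriction can wrap an existing attack's candidate-generation step without modifying the attack itself, which is presumably what makes the approach "readily compatible" with prior black-box methods.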
