June 9, 2022, 1:20 a.m. | Glenn Dawson, Muhammad Umer, Robi Polikar

cs.CR updates on arXiv.org

Deep neural networks for image classification are well-known to be vulnerable
to adversarial attacks. One such attack that has garnered recent attention is
the adversarial backdoor attack, which has demonstrated the capability to
perform targeted misclassification of specific examples. In particular,
backdoor attacks attempt to force a model to learn spurious relations between
backdoor trigger patterns and false labels. In response to this threat,
numerous defensive measures have been proposed; however, defenses against
backdoor attacks focus on backdoor pattern detection, …
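To make the trigger-and-relabel mechanism concrete, the following is a minimal data-poisoning sketch in Python/NumPy. It is an illustrative assumption, not the paper's method: the function name poison_with_trigger, the corner patch placement, and all parameter choices are hypothetical.

import numpy as np

def poison_with_trigger(images, labels, target_label, poison_fraction=0.1,
                        patch_size=3, patch_value=1.0, seed=0):
    """Stamp a small trigger patch onto a fraction of the training images
    and relabel those examples to the attacker's target class.

    images: float array of shape (N, H, W, C), values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()

    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Place the trigger in the bottom-right corner of each selected image.
    images[idx, -patch_size:, -patch_size:, :] = patch_value
    # Relabel the poisoned examples so a model trained on this data learns
    # to associate the trigger pattern with the attacker-chosen class.
    labels[idx] = target_label
    return images, labels, idx

# Hypothetical usage on random stand-in data:
x = np.random.rand(100, 32, 32, 3)
y = np.random.randint(0, 10, size=100)
x_poisoned, y_poisoned, poisoned_idx = poison_with_trigger(x, y, target_label=0)

A model fit on the poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger patch is present, which is the spurious trigger-label relation the abstract describes.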

Tags: adversarial attacks, backdoor
