June 17, 2022, 1:20 a.m. | Shawn Shan, Arjun Nitin Bhagoji, Haitao Zheng, Ben Y. Zhao

cs.CR updates on arXiv.org

In adversarial machine learning, new defenses against attacks on deep
learning systems are routinely broken soon after their release by more powerful
attacks. In this context, forensic tools can offer a valuable complement to
existing defenses, by tracing back a successful attack to its root cause, and
offering a path forward for mitigation to prevent similar attacks in the
future.


In this paper, we describe our efforts in developing a forensic traceback
tool for poison attacks on deep neural networks. …
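The truncated abstract does not describe the authors' actual traceback method, so the sketch below is only a hedged illustration of the general idea of attack forensics: scoring training samples by how much they appear responsible for a given successful misclassification. It uses a generic gradient-alignment (influence-style) heuristic, which is a well-known stand-in technique and not the paper's tool; `model`, `train_loader`, `x_attack`, and `y_attack` are hypothetical placeholders.

```python
# Hedged sketch, NOT the paper's method: rank training points by how strongly
# their loss gradient aligns with the gradient of the attack sample's loss.
import torch
import torch.nn.functional as F

def influence_scores(model, train_loader, x_attack, y_attack):
    """Return one responsibility score per training sample (higher = more
    suspicious as a poison candidate for this particular attack input)."""
    model.eval()
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient of the loss on the successful attack input.
    loss_atk = F.cross_entropy(model(x_attack), y_attack)
    g_atk = torch.autograd.grad(loss_atk, params)
    g_atk = torch.cat([g.flatten() for g in g_atk])

    scores = []
    for x, y in train_loader:  # assumes batch_size=1 for simplicity
        loss = F.cross_entropy(model(x), y)
        g = torch.autograd.grad(loss, params)
        g = torch.cat([t.flatten() for t in g])
        # A large dot product means this training point pushed the model in
        # the same direction the attack exploits.
        scores.append(torch.dot(g, g_atk).item())
    return scores
```

In practice one would inspect the top-scoring samples as candidate root causes; again, this is only a rough proxy for the forensic traceback the paper develops.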

Tags: attacks, data poisoning, forensics, networks
