April 28, 2022, 1:20 a.m. | Lukas Schulth, Christian Berghoff, Matthias Neu

cs.CR updates on arXiv.org arxiv.org

Predictions made by neural networks can be fraudulently altered by so-called
poisoning attacks; backdoor poisoning attacks are a special case. We study
suitable detection methods and introduce a new method called Heatmap
Clustering. In this method, we apply a $k$-means clustering algorithm to heatmaps
produced by the state-of-the-art explainable AI method Layer-wise Relevance
Propagation. The goal is to separate poisoned from unpoisoned data in the
dataset. We compare this method with a similar method, called Activation
Clustering, which also uses $k$-means clustering …
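The core idea can be illustrated with a minimal sketch (not the authors' implementation): given LRP relevance heatmaps precomputed for each training sample, flatten them and run $k$-means, then flag the smaller cluster as suspicious. The function name, the two-cluster setting, and the smaller-cluster heuristic are assumptions for illustration only.

```python
# Minimal sketch of heatmap clustering (assumptions noted in comments):
# cluster precomputed LRP heatmaps with k-means and flag the smaller
# cluster, since poisoned samples are typically a minority of the data.
import numpy as np
from sklearn.cluster import KMeans

def heatmap_clustering(heatmaps: np.ndarray, n_clusters: int = 2, seed: int = 0):
    """heatmaps: array of shape (n_samples, H, W) or (n_samples, d)
    holding LRP relevance maps computed beforehand with an XAI library."""
    X = heatmaps.reshape(len(heatmaps), -1)  # flatten each heatmap to a vector
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(X)
    # Heuristic (assumption): treat the smaller cluster as suspected poisoned data.
    sizes = np.bincount(labels, minlength=n_clusters)
    suspected_cluster = int(np.argmin(sizes))
    return labels, suspected_cluster

# Example with random stand-in data (real use would pass actual LRP heatmaps):
# labels, suspect = heatmap_clustering(np.random.rand(100, 28, 28))
```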

Tags: attacks, backdoor, lg, networks, poisoning
