Web: http://arxiv.org/abs/2204.12848

April 28, 2022, 1:20 a.m. | Lukas Schulth, Christian Berghoff, Matthias Neu

cs.CR updates on arXiv.org

Predictions made by neural networks can be fraudulently altered by so-called
poisoning attacks. A special case is the backdoor poisoning attack. We study
suitable detection methods and introduce a new method called Heatmap
Clustering. There, we apply a $k$-means clustering algorithm to heatmaps
produced by the state-of-the-art explainable AI method Layer-wise relevance
propagation. The goal is to separate poisoned from unpoisoned data in the
dataset. We compare this method with a similar method, called Activation
Clustering, which also uses $k$-means clustering …
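The core idea (cluster flattened relevance heatmaps with $k$-means so that poisoned samples, whose relevance concentrates on the backdoor trigger, fall into their own cluster) can be sketched as follows. This is a minimal illustration with synthetic arrays standing in for real LRP heatmaps, not the authors' implementation; the trigger location, heatmap size, and plain k-means loop are all assumptions for the demo.

```python
# Sketch of the Heatmap Clustering idea: k-means on flattened heatmaps.
# The "LRP heatmaps" here are simulated: clean samples have diffuse
# relevance, poisoned samples concentrate relevance on a trigger patch.
import numpy as np

rng = np.random.default_rng(0)

# 40 clean and 20 poisoned synthetic 8x8 heatmaps (hypothetical data).
clean = rng.random((40, 8, 8))
poisoned = rng.random((20, 8, 8)) * 0.1
poisoned[:, :2, :2] += 5.0  # relevance focused on the assumed trigger corner

X = np.vstack([clean, poisoned]).reshape(60, -1)  # flatten each heatmap


def kmeans(X, k=2, iters=50, seed=0):
    """Plain k-means; a library implementation would serve equally well."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each heatmap to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Recompute centers, keeping the old one if a cluster empties.
        centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
    return labels


labels = kmeans(X)
# With this synthetic setup, the poisoned heatmaps should be isolated
# together in one cluster, separated from the bulk of the clean data.
```

Since the trigger dominates the relevance of poisoned samples, their flattened heatmaps lie far from the clean ones in feature space, which is what lets a simple $k$-means with $k=2$ pull them apart.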

