Aug. 16, 2022, 1:20 a.m. | Mingyuan Fan, Yang Liu, Cen Chen, Ximeng Liu, Wenzhong Guo

cs.CR updates on arXiv.org arxiv.org

The opacity of neural networks makes them vulnerable to backdoor attacks, in
which the hidden attention of infected neurons is triggered to override normal
predictions with attacker-chosen ones. In this paper, we propose a novel
backdoor defense method to mark and purify the infected neurons in
backdoored neural networks. Specifically, we first define a new metric, called
benign salience. By incorporating first-order gradient information to retain the
connections between neurons, benign salience can identify infected neurons
with higher accuracy …
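The abstract is truncated, so the paper's exact definition of benign salience is not available here. As a rough illustration of the general idea it describes, the sketch below scores each neuron by a first-order gradient-times-activation term averaged over clean data, then zeroes out (purifies) the neurons with the lowest benign salience; the scoring formula, the pruning fraction, and the function names are all assumptions, not the authors' method.

```python
import numpy as np

def benign_salience(weights, clean_inputs, grad_outputs):
    """Illustrative neuron-level score: |activation x gradient| averaged
    over a clean batch. (Hypothetical stand-in for the paper's metric.)"""
    acts = clean_inputs @ weights                 # (batch, neurons)
    return np.abs(acts * grad_outputs).mean(axis=0)

def purify(weights, salience, frac=0.25):
    """Zero out the fraction of neurons with the lowest benign salience,
    treating them as candidate infected neurons."""
    k = max(1, int(frac * salience.size))
    infected = np.argsort(salience)[:k]
    pruned = weights.copy()
    pruned[:, infected] = 0.0                     # sever the infected columns
    return pruned, infected

# Toy usage on a single linear layer with random clean data/gradients.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))                       # 8 inputs -> 4 neurons
X = rng.normal(size=(16, 8))                      # clean batch
G = rng.normal(size=(16, 4))                      # upstream gradients
sal = benign_salience(W, X, G)
W_pure, infected = purify(W, sal)
```

In practice such a defense would be applied layer by layer on a held-out clean set, pruning or fine-tuning the flagged neurons rather than simply zeroing them.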

