Jan. 30, 2023, 2:10 a.m. | Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni

cs.CR updates on arXiv.org arxiv.org

Neural networks are vulnerable to backdoor poisoning attacks, where the
attackers maliciously poison the training set and insert triggers into the test
input to change the prediction of the victim model. Existing defenses for
backdoor attacks either provide no formal guarantees or come with
expensive-to-compute and ineffective probabilistic guarantees. We present
PECAN, an efficient and certified approach for defending against backdoor
attacks. The key insight powering PECAN is to apply off-the-shelf test-time
evasion certification techniques on a set of neural …
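The threat model in the first sentence — poisoning a fraction of the training set and stamping a trigger on test inputs — can be sketched as follows. This is a minimal illustration only: the 3x3 corner patch, the 5% poison rate, and the function names are assumptions for the sketch, not code or parameters from the PECAN paper.

```python
import numpy as np

def apply_trigger(image, value=1.0):
    """Stamp a small patch (the backdoor 'trigger') into one corner.

    The 3x3 bright patch is an illustrative choice; real attacks use
    many trigger shapes and placements.
    """
    poisoned = image.copy()
    poisoned[-3:, -3:] = value  # 3x3 patch, bottom-right corner
    return poisoned

def poison_training_set(images, labels, target_label, rate=0.05, seed=0):
    """Poison a random fraction of the training set: stamp the trigger
    and relabel those examples with the attacker's target class, so a
    model trained on the set associates the trigger with that class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx

# Demo on synthetic 8x8 "images". At test time the attacker stamps the
# same trigger on a clean input to steer the victim model's prediction
# toward the target class.
X = np.random.default_rng(1).random((100, 8, 8))
y = np.random.default_rng(2).integers(0, 10, size=100)
X_p, y_p, poisoned_idx = poison_training_set(X, y, target_label=7)
```

Certified defenses like PECAN aim to guarantee a prediction is unaffected by any such poisoning up to a bounded number of modified training examples.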

