June 13, 2022, 1:20 a.m. | Nan Luo, Yuanzhang Li, Yajie Wang, Shangbo Wu, Yu-an Tan, Quanxin Zhang

cs.CR updates on arXiv.org

Backdoor attacks threaten Deep Neural Networks (DNNs). To improve stealthiness, researchers have proposed clean-label backdoor attacks, which require the adversary not to alter the labels of the poisoned training data. The clean-label setting makes an attack stealthier because the image-label pairs remain correct, but two problems persist: first, traditional methods for poisoning training data are ineffective in this setting; second, traditional triggers are still perceptible and therefore not stealthy. To solve these problems, we propose a two-phase, image-specific trigger generation method to enhance …
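For readers unfamiliar with the setting, the sketch below illustrates clean-label poisoning in general terms: a trigger pattern is blended only into training images that already carry the target label, so every image-label pair stays correct. This is a minimal illustration of the threat model, not the paper's two-phase, image-specific method; the names apply_trigger and poison_clean_label, the blending coefficient alpha, and the poison_rate parameter are all hypothetical choices for this sketch.

import numpy as np

def apply_trigger(image, trigger, alpha=0.2):
    # Blend a fixed trigger pattern into an image (pixel values in [0, 1]).
    return np.clip((1 - alpha) * image + alpha * trigger, 0.0, 1.0)

def poison_clean_label(images, labels, target_class, trigger,
                       poison_rate=0.1, rng=None):
    # Clean-label poisoning: stamp the trigger only onto images that
    # already belong to the target class; labels are never modified.
    rng = rng if rng is not None else np.random.default_rng(0)
    target_idx = np.flatnonzero(labels == target_class)
    n_poison = int(len(target_idx) * poison_rate)
    chosen = rng.choice(target_idx, size=n_poison, replace=False)
    poisoned = images.copy()
    for i in chosen:
        poisoned[i] = apply_trigger(poisoned[i], trigger)
    return poisoned, labels  # labels returned unchanged (clean-label)

# Example: poison 10% of class-0 images in a toy dataset with a
# small white corner patch as the (non-stealthy) trigger.
images = np.random.rand(100, 32, 32, 3)
labels = np.random.randint(0, 10, size=100)
trigger = np.zeros((32, 32, 3))
trigger[-4:, -4:, :] = 1.0
poisoned_images, poisoned_labels = poison_clean_label(
    images, labels, target_class=0, trigger=trigger)

Note that this sketch uses a fixed, visible patch; the abstract's point is precisely that such traditional triggers remain perceptible, which motivates the image-specific triggers the paper proposes.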

Tags: attack, backdoor
