Web: http://arxiv.org/abs/2209.05244

Sept. 14, 2022, 1:20 a.m. | Yuhang Wang, Huafeng Shi, Rui Min, Ruijia Wu, Siyuan Liang, Yichao Wu, Ding Liang, Aishan Liu

cs.CR updates on arXiv.org arxiv.org

Extensive evidence has demonstrated that deep neural networks (DNNs) are
vulnerable to backdoor attacks, which motivates the development of backdoor
detection methods. Existing backdoor detection methods are typically tailored
to a single attack type (e.g., patch-based or perturbation-based). In
practice, however, adversaries may mount multiple types of backdoor attack,
which challenges current detection strategies. Based on the observation that
adversarial perturbations are highly correlated with trigger patterns, this
paper proposes Adaptive Perturbation Generation (APG) …
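The key observation above — that input-gradient (adversarial) perturbations concentrate on backdoor trigger locations — can be illustrated with a toy example. The sketch below is not APG itself; it assumes a hypothetical linear "backdoored" classifier whose weights are inflated on a few planted trigger pixels, and shows that a simple FGSM-style perturbation singles out exactly those pixels:

```python
import numpy as np

# Hypothetical backdoored linear classifier: logit = w . x + b.
# The (assumed) backdoor plants a large weight on a few trigger pixels,
# so gradient-based adversarial perturbations concentrate on them.

def fgsm_perturbation(w, eps):
    # For a linear model, the input gradient of the logit is w itself,
    # so the FGSM step is eps * sign(w).
    return eps * np.sign(w)

rng = np.random.default_rng(0)
d = 100
w = rng.normal(0, 0.1, d)
trigger_idx = [3, 17, 42]      # planted trigger pixels (illustrative)
w[trigger_idx] = 5.0           # backdoor: outsized weight on the trigger

delta = fgsm_perturbation(w, eps=1.0)

# The perturbation's highest-impact coordinates (|w * delta|) coincide
# with the trigger pixels -- the correlation the abstract relies on.
impact = np.abs(w * delta)
top3 = set(np.argsort(impact)[-3:])
print(sorted(top3))  # -> [3, 17, 42]
```

Real detectors work on nonlinear networks, where the gradient must be computed per-input, but the same correlation between perturbation energy and trigger location is what makes perturbation-based backdoor detection possible.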

backdoor detection
