April 23, 2024, 4:11 a.m. | Ruotong Wang, Hongrui Chen, Zihao Zhu, Li Liu, Baoyuan Wu

cs.CR updates on arXiv.org

arXiv:2306.00816v3 Announce Type: replace-cross
Abstract: Deep neural networks (DNNs) can be manipulated to exhibit specific behaviors when exposed to specific trigger patterns, without affecting their performance on benign samples; this is dubbed a backdoor attack. Currently, implementing backdoor attacks in physical scenarios still faces significant challenges. Physical attacks are labor-intensive and time-consuming, and their triggers are selected in a manual and heuristic way. Moreover, expanding digital attacks to physical scenarios faces many challenges due to their sensitivity to visual distortions and the absence …
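
For readers unfamiliar with the mechanism the abstract refers to, the sketch below shows how a digital backdoor is commonly planted via data poisoning: a small trigger patch is stamped onto a fraction of training images and their labels are flipped to an attacker-chosen target class. This is a generic illustration, not the paper's physical-attack method; the patch position, poison rate, and target class are illustrative assumptions.

```python
# Minimal data-poisoning backdoor sketch (illustrative, not the paper's method).
import numpy as np

def stamp_trigger(image: np.ndarray, patch_size: int = 4) -> np.ndarray:
    """Overwrite the bottom-right corner of an HxWxC image with a white patch."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = 1.0  # pixel values assumed in [0, 1]
    return poisoned

def poison_dataset(images: np.ndarray,
                   labels: np.ndarray,
                   target_class: int = 0,     # assumed attacker-chosen label
                   poison_rate: float = 0.05, # assumed fraction of poisoned samples
                   seed: int = 0):
    """Return copies of (images, labels) with a fraction of samples backdoored."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = stamp_trigger(poisoned_images[i])
        poisoned_labels[i] = target_class
    return poisoned_images, poisoned_labels

# A classifier trained on the poisoned set behaves normally on clean inputs
# but predicts `target_class` whenever the trigger patch appears at test time.
```

The abstract's point is that carrying this idea into the physical world is harder: a printed or real-world trigger must survive viewpoint, lighting, and other visual distortions that a pixel-level patch like the one above does not face.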

