March 18, 2024, 4:11 a.m. | Sze Jue Yang, Chinh D. La, Quang H. Nguyen, Kok-Seng Wong, Anh Tuan Tran, Chee Seng Chan, Khoa D. Doan

cs.CR updates on arXiv.org arxiv.org

arXiv:2312.03419v3 Announce Type: replace
Abstract: Backdoor attacks, representing an emerging threat to the integrity of deep neural networks, have garnered significant attention due to their ability to compromise deep learning systems clandestinely. While numerous backdoor attacks occur within the digital realm, their practical implementation in real-world prediction systems remains limited and vulnerable to disturbances in the physical world. Consequently, this limitation has given rise to the development of physical backdoor attacks, where trigger objects manifest as physical entities within the …

