March 12, 2024, 4:11 a.m. | Xiaolei Liu, Ming Yi, Kangyi Ding, Bangzhou Xin, Yixiao Xu, Li Yan, Chao Shen

cs.CR updates on arXiv.org (arxiv.org)

arXiv:2304.10985v2 Announce Type: replace
Abstract: Learning-based systems have been demonstrated to be vulnerable to backdoor attacks, wherein malicious users manipulate model performance by injecting backdoors into the target model and activating them with specific triggers. Previous backdoor attack methods primarily focused on two key metrics: attack success rate and stealthiness. However, these methods often necessitate significant privileges over the target model, such as control over the training process, making them challenging to implement in real-world scenarios. Moreover, the robustness of …
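To make the trigger-based poisoning idea described in the abstract concrete, here is a minimal, hedged sketch of generic backdoor data poisoning: a small patch is stamped onto a fraction of training images and their labels are flipped to an attacker-chosen target class. This illustrates the general concept only, not the paper's method; the function names, the patch trigger, and the poisoning rate are illustrative assumptions.

```python
# Illustrative sketch of generic trigger-based backdoor poisoning.
# NOT the paper's method: add_trigger/poison_dataset and the corner patch
# are assumptions chosen only to demonstrate the concept.
import numpy as np

def add_trigger(image: np.ndarray, patch_value: float = 1.0, size: int = 3) -> np.ndarray:
    """Stamp a small bright patch in the bottom-right corner as the trigger."""
    poisoned = image.copy()
    poisoned[-size:, -size:] = patch_value  # assumes HxW or HxWxC image scaled to [0, 1]
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   target_label: int, poison_rate: float = 0.05,
                   seed: int = 0):
    """Poison a fraction of the training set: add the trigger and flip the label."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label  # the model learns: trigger present -> target class
    return images, labels

if __name__ == "__main__":
    # Toy grayscale data stands in for a real training set.
    X = np.random.rand(100, 28, 28)
    y = np.random.randint(0, 10, size=100)
    X_p, y_p = poison_dataset(X, y, target_label=7, poison_rate=0.1)
    print("poisoned samples:", int(np.sum(y_p == 7) - np.sum(y == 7)))
```

A model trained on such a poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger appears, which is the behavior the abstract's metrics (attack success rate and stealthiness) measure. Note that this sketch assumes control over the training data, exactly the kind of privilege the paper identifies as a limitation of prior methods.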

Tags: arXiv, backdoor attacks, cs.AI, cs.CR, cs.CV
