April 24, 2023, 1:10 a.m. | Ming Yi, Yixiao Xu, Kangyi Ding, Mingyong Yin, Xiaolei Liu

cs.CR updates on arXiv.org arxiv.org

As deep neural networks continue to be used in critical domains, concerns
over their security have emerged. Deep learning models are vulnerable to
backdoor attacks because of their lack of transparency. A backdoored model may
perform normally in routine environments but exhibit malicious behavior when
the input contains a trigger. Current research on backdoor attacks focuses on
improving the stealthiness of triggers, and most approaches require strong
attacker capabilities, such as knowledge of the model structure or control over …
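To make the trigger-based poisoning described in the abstract concrete, here is a minimal sketch, not the paper's method: it stamps a hypothetical white corner patch (a BadNets-style trigger) onto a small fraction of training images and relabels them to an attacker-chosen target class. The names `TRIGGER_SIZE`, `TARGET_LABEL`, `POISON_RATE`, `stamp_trigger`, and `poison_dataset` are assumptions made for this illustration.

```python
import numpy as np

# Hypothetical illustration of a patch-trigger backdoor (BadNets-style).
# The patch position, size, and poisoning rate below are assumptions,
# not the trigger design studied in the paper.

TRIGGER_SIZE = 3    # side length of the square patch, in pixels (assumed)
TARGET_LABEL = 7    # class the attacker wants triggered inputs mapped to (assumed)
POISON_RATE = 0.05  # fraction of training samples to poison (assumed)


def stamp_trigger(image: np.ndarray) -> np.ndarray:
    """Overwrite the bottom-right corner of an HxWxC image with a white patch."""
    poisoned = image.copy()
    poisoned[-TRIGGER_SIZE:, -TRIGGER_SIZE:, :] = 1.0  # assumes pixel values in [0, 1]
    return poisoned


def poison_dataset(images: np.ndarray, labels: np.ndarray, rng=None):
    """Return copies of (images, labels) with a fraction of samples backdoored.

    Clean samples are left untouched, so a model trained on the result still
    learns the normal task; the poisoned samples teach it to associate the
    trigger patch with TARGET_LABEL.
    """
    rng = np.random.default_rng() if rng is None else rng
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * POISON_RATE)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = TARGET_LABEL
    return images, labels
```

A model trained on such a poisoned set typically behaves normally on clean inputs but predicts the target class whenever the patch is present, which is the dual behavior the abstract describes.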
