Jan. 24, 2023, 2:10 a.m. | Soumyadeep Pal, Ren Wang, Yuguang Yao, Sijia Liu

cs.CR updates on arXiv.org

Recent studies of backdoor attacks on model training have shown that
poisoning a small portion of the training data is sufficient to induce
incorrect, attacker-manipulated predictions on poisoned test-time inputs
while maintaining high clean accuracy on downstream tasks. The stealthiness
of backdoor attacks poses substantial defense challenges in today's machine
learning paradigm. In this paper, we explore the potential of self-training
with additional unlabeled data for mitigating backdoor attacks. We begin
with a pilot study showing that vanilla …
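The abstract describes trigger-based data poisoning only at a high level. The snippet below is a minimal sketch of that setup, assuming a BadNets-style patch trigger stamped onto NumPy image arrays; the function name, trigger size, poison rate, and target label are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of trigger-based data poisoning (BadNets-style), assuming
# images of shape (N, H, W, C) and integer class labels. All constants here
# are illustrative choices, not the paper's actual attack configuration.
import numpy as np

def poison_dataset(images, labels, target_label=0, poison_rate=0.05, seed=0):
    """Stamp a small bright patch onto a random subset of images and flip
    their labels to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # 3x3 trigger patch in the bottom-right corner, set to maximum intensity.
    images[idx, -3:, -3:, :] = images.max()
    labels[idx] = target_label
    return images, labels, idx

# Toy usage: 1000 random "images", 5% of them poisoned toward class 0.
x = np.random.rand(1000, 32, 32, 3).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
x_p, y_p, poisoned_idx = poison_dataset(x, y)
print(f"poisoned {len(poisoned_idx)} of {len(x)} samples")
```

A model trained on `x_p, y_p` can learn to associate the patch with the target class while its accuracy on clean inputs stays high, which is the stealth property the abstract refers to; the paper's proposed mitigation via self-training on additional unlabeled data is not reproduced here.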

