Feb. 16, 2024, 5:10 a.m. | Yiwei Lu, Gautam Kamath, Yaoliang Yu

cs.CR updates on arXiv.org

arXiv:2204.09092v2 Announce Type: replace-cross
Abstract: Data poisoning attacks, in which a malicious adversary aims to influence a model by injecting "poisoned" data into the training process, have attracted significant recent attention. In this work, we take a closer look at existing poisoning attacks and connect them with old and new algorithms for solving sequential Stackelberg games. By choosing an appropriate loss function for the attacker and optimizing with algorithms that exploit second-order information, we design poisoning attacks that are effective …
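To make the Stackelberg framing concrete, here is a minimal, generic sketch (not the paper's algorithm): the attacker acts as the leader, perturbing a few poison points; the learner acts as the follower, fitting on clean plus poisoned data; the attacker then ascends the gradient of the learner's validation loss, differentiating through the follower's optimum, which is where second-order information (the follower's Hessian) appears. Ridge regression is used only so the follower's best response has a closed form; all names, sizes, and step sizes below are illustrative assumptions.

```python
# Hypothetical illustration of poisoning as a sequential Stackelberg game,
# not a reproduction of the paper's attack.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic clean training set, validation set, and a handful of poison points.
d, n_clean, n_val, n_poison = 5, 200, 50, 10
w_true = rng.normal(size=d)
X_clean = rng.normal(size=(n_clean, d)); y_clean = X_clean @ w_true
X_val = rng.normal(size=(n_val, d));     y_val = X_val @ w_true
X_p = rng.normal(size=(n_poison, d));    y_p = rng.normal(size=n_poison)

lam = 1e-2  # ridge regularization strength (illustrative)

def follower_fit(X, y):
    # Follower's best response: closed-form ridge regression.
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def val_loss(w):
    r = X_val @ w - y_val
    return 0.5 * np.mean(r ** 2)

def leader_grad(X_p):
    # Gradient of the leader's objective (validation loss of the follower's
    # best response) w.r.t. the poison features, obtained by implicit
    # differentiation through the follower's optimum; the follower's Hessian
    # X^T X + lam*I is where second-order information enters.
    X = np.vstack([X_clean, X_p])
    y = np.concatenate([y_clean, y_p])
    H = X.T @ X + lam * np.eye(d)
    w = np.linalg.solve(H, X.T @ y)
    g_w = X_val.T @ (X_val @ w - y_val) / n_val   # d(val loss)/dw
    v = np.linalg.solve(H, g_w)                   # H^{-1} d(val loss)/dw
    resid = y_p - X_p @ w
    # For poison row x_i with label y_i: grad_i = (y_i - x_i.w) v - (x_i.v) w
    return resid[:, None] * v[None, :] - (X_p @ v)[:, None] * w[None, :]

# Leader moves first: projected gradient *ascent* on the validation loss.
X_adv = X_p.copy()
for _ in range(200):
    X_adv = np.clip(X_adv + 0.5 * leader_grad(X_adv), -3.0, 3.0)

w_clean = follower_fit(X_clean, y_clean)
w_pois = follower_fit(np.vstack([X_clean, X_adv]),
                      np.concatenate([y_clean, y_p]))
print(f"validation loss, clean training set:    {val_loss(w_clean):.4f}")
print(f"validation loss, poisoned training set: {val_loss(w_pois):.4f}")
```

Running the sketch should show the validation loss rising once the optimized poison points are added to the training set, which is the leader's goal in this simplified game.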
