Nov. 13, 2023, 2:10 a.m. | Fnu Suya, Xiao Zhang, Yuan Tian, David Evans

cs.CR updates on arXiv.org

We study indiscriminate poisoning for linear learners, where an adversary
injects a few crafted examples into the training data with the goal of forcing
the induced model to incur a higher test error. Inspired by the observation
that linear learners on some datasets are able to resist the best-known
attacks even without any defenses, we further investigate whether datasets can
be inherently robust to indiscriminate poisoning attacks on linear learners.
For theoretical Gaussian distributions, we rigorously characterize the
behavior of …
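
To make the threat model concrete, here is a minimal sketch of indiscriminate poisoning against a linear learner. This is not the paper's optimized attack: the synthetic Gaussian data, the 3% injection budget, and the label-flipping heuristic are all illustrative assumptions, meant only to show how a few injected training points can degrade test error.

# A minimal sketch of the indiscriminate-poisoning threat model described
# above. NOT the paper's attack: data, budget, and heuristic are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Two-class Gaussian data, loosely echoing the abstract's theoretical setting.
n = 1000
X = np.vstack([rng.normal(-1.0, 1.0, size=(n, 2)),
               rng.normal(+1.0, 1.0, size=(n, 2))])
y = np.array([0] * n + [1] * n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def test_error(X_train, y_train):
    # Test error of a linear learner (logistic regression) on held-out data.
    clf = LogisticRegression().fit(X_train, y_train)
    return 1.0 - clf.score(X_te, y_te)

err_clean = test_error(X_tr, y_tr)

# The adversary injects a small budget of crafted points. Here they are
# copies of randomly chosen training points with flipped labels, a simple
# baseline that is far weaker than the optimized attacks the paper studies.
budget = int(0.03 * len(X_tr))
idx = rng.choice(len(X_tr), size=budget, replace=False)
err_poisoned = test_error(np.vstack([X_tr, X_tr[idx]]),
                          np.concatenate([y_tr, 1 - y_tr[idx]]))

print(f"clean test error:    {err_clean:.3f}")
print(f"poisoned test error: {err_poisoned:.3f}")

On well-separated data like this, a small flipped-label budget typically moves the decision boundary very little, loosely echoing the paper's observation that some datasets inherently resist indiscriminate poisoning.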
