Feb. 16, 2024, 5:10 a.m. | Yiwei Lu, Gautam Kamath, Yaoliang Yu

cs.CR updates on arXiv.org arxiv.org

arXiv:2204.09092v2 Announce Type: replace-cross
Abstract: Data poisoning attacks, in which a malicious adversary aims to influence a model by injecting "poisoned" data into the training process, have attracted significant recent attention. In this work, we take a closer look at existing poisoning attacks and connect them with old and new algorithms for solving sequential Stackelberg games. By choosing an appropriate loss function for the attacker and optimizing with algorithms that exploit second-order information, we design poisoning attacks that are effective …
