Indiscriminate Data Poisoning Attacks on Neural Networks
Feb. 16, 2024, 5:10 a.m. | Yiwei Lu, Gautam Kamath, Yaoliang Yu
cs.CR updates on arXiv.org
Abstract: Data poisoning attacks, in which a malicious adversary aims to influence a model by injecting "poisoned" data into the training process, have attracted significant recent attention. In this work, we take a closer look at existing poisoning attacks and connect them with old and new algorithms for solving sequential Stackelberg games. By choosing an appropriate loss function for the attacker and optimizing with algorithms that exploit second-order information, we design poisoning attacks that are effective …
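The Stackelberg framing in the abstract treats the attacker as the "leader" who commits to poisoned data, and the learner as the "follower" whose training is a best response. As a minimal illustrative sketch (not the paper's algorithm), the toy example below poisons a 1-D ridge regression: the follower's best response has a closed form, so the leader can run gradient ascent on a single poison label to maximize the model's error on a target point. All function names and hyperparameters here are hypothetical choices for illustration.

```python
import numpy as np

# Toy Stackelberg-style poisoning sketch (illustrative, not the paper's method):
# the follower fits 1-D ridge regression in closed form on clean + poison data;
# the leader ascends the gradient of the target error w.r.t. the poison label.

def fit_ridge(xs, ys, lam=0.1):
    """Follower's best response: closed-form 1-D ridge regression weight."""
    return xs @ ys / (xs @ xs + lam)

def attacker_loss(y_p, x_clean, y_clean, x_p, x_t, y_t, lam=0.1):
    """Leader's objective: squared error of the poisoned model on a target point."""
    xs = np.append(x_clean, x_p)
    ys = np.append(y_clean, y_p)
    w = fit_ridge(xs, ys, lam)
    return (w * x_t - y_t) ** 2

def poison(x_clean, y_clean, x_p=1.0, x_t=1.0, y_t=2.0,
           lam=0.1, steps=200, lr=0.5):
    """Leader's attack: gradient ascent on the poison label y_p."""
    denom = x_clean @ x_clean + x_p ** 2 + lam
    y_p = 0.0
    for _ in range(steps):
        xs = np.append(x_clean, x_p)
        ys = np.append(y_clean, y_p)
        w = fit_ridge(xs, ys, lam)
        # Analytic gradient of the attacker loss w.r.t. y_p,
        # using dw/dy_p = x_p / denom from the closed-form solution.
        grad = 2 * (w * x_t - y_t) * x_t * (x_p / denom)
        y_p += lr * grad  # ascend: the attacker maximizes the target error
    return y_p

rng = np.random.default_rng(0)
x_clean = rng.uniform(-1, 1, 20)
y_clean = 2 * x_clean  # clean relationship: y = 2x
before = attacker_loss(0.0, x_clean, y_clean, 1.0, 1.0, 2.0)
y_p = poison(x_clean, y_clean)
after = attacker_loss(y_p, x_clean, y_clean, 1.0, 1.0, 2.0)
```

Because the inner (training) problem is solved exactly at every leader step, this is the sequential-game structure the abstract alludes to; the paper's contribution lies in scaling such bilevel attacks to neural networks with second-order optimization, which this linear toy does not attempt.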