Have You Poisoned My Data? Defending Neural Networks against Data Poisoning
March 21, 2024, 4:10 a.m. | Fabio De Gaspari, Dorjan Hitaj, Luigi V. Mancini
cs.CR updates on arXiv.org arxiv.org
Abstract: The unprecedented availability of training data fueled the rapid development of powerful neural networks in recent years. However, the need for such large amounts of data leads to potential threats such as poisoning attacks: adversarial manipulations of the training data aimed at compromising the learned model to achieve a given adversarial goal.
This paper investigates defenses against clean-label poisoning attacks and proposes a novel approach to detect and filter poisoned datapoints in the transfer learning …
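The abstract mentions detecting and filtering poisoned datapoints in the transfer-learning setting. As a rough illustration of what such filtering can look like in general (not the paper's specific method), the sketch below flags training points whose feature-space representations lie far from their class centroid, a common baseline defense against clean-label poisons; the function name and the quantile cutoff are illustrative assumptions.

```python
import numpy as np

def filter_feature_outliers(features, labels, quantile=0.95):
    """Generic poisoning-defense sketch: for each class, compute the
    centroid of the (e.g. penultimate-layer) feature vectors and drop
    points unusually far from it. Returns a boolean keep-mask.
    NOTE: illustrative baseline only, not the method from the paper."""
    keep = np.ones(len(features), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = features[idx].mean(axis=0)
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        cutoff = np.quantile(dists, quantile)  # keep the closest ~95%
        keep[idx[dists > cutoff]] = False
    return keep
```

In practice such filters operate on features extracted by the pre-trained backbone used for transfer learning, since clean-label poisons are typically crafted to collide with a target in exactly that feature space.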