March 21, 2024, 4:10 a.m. | Fabio De Gaspari, Dorjan Hitaj, Luigi V. Mancini

cs.CR updates on arXiv.org

arXiv:2403.13523v1 Announce Type: cross
Abstract: The unprecedented availability of training data fueled the rapid development of powerful neural networks in recent years. However, the need for such large amounts of data leads to potential threats such as poisoning attacks: adversarial manipulations of the training data aimed at compromising the learned model to achieve a given adversarial goal.
This paper investigates defenses against clean-label poisoning attacks and proposes a novel approach to detect and filter poisoned datapoints in the transfer learning …
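The abstract is truncated, so the paper's actual detection method is not shown here. For context only, the sketch below illustrates one common defense in this family: per-class outlier filtering in the feature space of a frozen pretrained encoder, a typical setup in transfer-learning pipelines. The function name filter_poisons, the quantile threshold, and the synthetic data are all hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of a feature-space filtering defense against
# data poisoning in a transfer-learning setting. NOT the paper's
# method; the quantile threshold and all names are illustrative.
import numpy as np

def filter_poisons(features, labels, quantile=0.95):
    """Keep datapoints whose distance to their class centroid (in the
    frozen encoder's feature space) falls below a per-class quantile.
    Points far from their class centroid are flagged as suspicious."""
    keep = np.zeros(len(labels), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = features[idx].mean(axis=0)
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        # Retain only the points inside the per-class distance quantile.
        keep[idx] = dists <= np.quantile(dists, quantile)
    return keep

# Toy usage with synthetic features standing in for encoder outputs.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 8))   # 100 points, 8-dim features
labs = rng.integers(0, 2, size=100)  # 2 classes
mask = filter_poisons(feats, labs)
print(f"kept {mask.sum()} / {len(mask)} datapoints")
```

The filtered subset (features[mask], labs[mask]) would then be used to fine-tune the downstream classifier; clean-label poisons tend to sit away from their class's clean feature cluster, which is what this kind of filter exploits.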

