March 11, 2024, 4:11 a.m. | Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

cs.CR updates on arXiv.org arxiv.org

arXiv:2204.05986v3 Announce Type: replace
Abstract: The recent success of machine learning (ML) has been fueled by the increasing availability of computing power and large amounts of data in many different applications. However, the trustworthiness of the resulting models can be compromised when such data is maliciously manipulated to mislead the learning process. In this article, we first review poisoning attacks that compromise the training data used to learn ML models, including attacks that aim to reduce the overall performance, manipulate …
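To make the idea of a poisoning attack concrete, here is a minimal, self-contained toy sketch (not taken from the paper, whose survey covers far broader attack classes): an attacker injects mislabeled points into the training set of a simple nearest-centroid classifier, dragging the learned class-0 centroid across the true decision boundary and degrading overall test accuracy. All data and names here are hypothetical, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=200):
    """Two Gaussian blobs: class 0 around (-2,-2), class 1 around (+2,+2)."""
    X = np.vstack([rng.normal(-2.0, 1.0, (n, 2)),
                   rng.normal(+2.0, 1.0, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

def fit_centroids(X, y):
    """'Learn' one mean vector per class (nearest-centroid classifier)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def accuracy(centroids, X, y):
    """Classify each point by its nearest class centroid and score it."""
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1)
                      for c in classes], axis=1)
    preds = np.array(classes)[dists.argmin(axis=1)]
    return float((preds == y).mean())

X_train, y_train = make_data()
X_test, y_test = make_data()

# Poisoning: inject points deep inside class-1 territory but labeled class 0.
# These pull the learned class-0 mean past the class-1 mean, so clean
# class-0 test points end up closer to the class-1 centroid.
X_poison = np.full((100, 2), 12.0)
y_poison = np.zeros(100, dtype=int)
X_bad = np.vstack([X_train, X_poison])
y_bad = np.concatenate([y_train, y_poison])

clean_acc = accuracy(fit_centroids(X_train, y_train), X_test, y_test)
poisoned_acc = accuracy(fit_centroids(X_bad, y_bad), X_test, y_test)
print(f"clean: {clean_acc:.3f}  poisoned: {poisoned_acc:.3f}")
```

The clean model separates the two blobs almost perfectly, while the poisoned model misclassifies most class-0 test points, which is exactly the availability-style performance degradation the abstract refers to.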

