May 2, 2023, 1:10 a.m. | Jingfeng Zhang, Bo Song, Bo Han, Lei Liu, Gang Niu, Masashi Sugiyama

cs.CR updates on arXiv.org arxiv.org

Adversarial training (AT) is a robust learning algorithm that can defend
against adversarial attacks in the inference phase and mitigate the side
effects of corrupted data in the training phase. As such, it has become an
indispensable component of many artificial intelligence (AI) systems. However,
in high-stakes AI applications, it is crucial to understand AT's vulnerabilities
to ensure reliable deployment. In this paper, we investigate AT's
susceptibility to poisoning attacks, a type of malicious attack that
manipulates training data to …
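To make the two phases concrete, here is a minimal sketch of adversarial training on a toy logistic-regression model, using FGSM-style perturbations in the inner step. The model, data, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linearly separable data: the label is the sign of the first feature.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)

w = np.zeros(2)
b = 0.0
eps, lr = 0.1, 0.5  # perturbation budget and learning rate (assumed values)

for _ in range(200):
    # Inner step: craft FGSM perturbations against the current model.
    # For BCE loss, the gradient w.r.t. the input x is (p - y) * w.
    p = sigmoid(X @ w + b)
    X_adv = X + eps * np.sign(np.outer(p - y, w))

    # Outer step: update the model to minimize loss on adversarial examples.
    p_adv = sigmoid(X_adv @ w + b)
    gw = X_adv.T @ (p_adv - y) / len(y)
    gb = float(np.mean(p_adv - y))
    w -= lr * gw
    b -= lr * gb

# Clean accuracy of the adversarially trained model.
acc = float(np.mean((sigmoid(X @ w + b) > 0.5) == (y > 0.5)))
print(acc > 0.8)
```

A poisoning attack on this pipeline would tamper with `X` and `y` before the loop runs (e.g., flipping labels or injecting crafted points), which is the training-phase threat model the abstract refers to.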

