Oct. 24, 2022, 1:20 a.m. | Miguel A. Ramirez, Sangyoung Yoon, Ernesto Damiani, Hussam Al Hamadi, Claudio Agostino Ardagna, Nicola Bena, Young-Ji Byon, Tae-Yeon Kim, Chung-Suk Ch

cs.CR updates on arXiv.org arxiv.org

Recent studies have revealed several vulnerabilities to attacks with the potential to jeopardize the integrity of a model, opening a new window of opportunity for cyber-security research in recent years. This paper focuses on data poisoning attacks involving label-flipping. These attacks occur during the training phase, the attacker's aim being to compromise the integrity of the targeted machine learning model by drastically reducing the overall accuracy of …
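The label-flipping attack described above can be illustrated with a toy sketch (hypothetical, not code from the paper): a 1-nearest-neighbour classifier is trained once on clean data and once on data in which a fraction of labels has been inverted, and the drop in test accuracy is measured.

```python
import random

random.seed(0)

# Toy 1-D dataset: class 0 clustered near 0.0, class 1 near 1.0.
train = [(random.gauss(0.0, 0.1), 0) for _ in range(50)] + \
        [(random.gauss(1.0, 0.1), 1) for _ in range(50)]
test = [(random.gauss(0.0, 0.1), 0) for _ in range(50)] + \
       [(random.gauss(1.0, 0.1), 1) for _ in range(50)]

def nn_classifier(data):
    """1-nearest-neighbour: predict the label of the closest training point."""
    return lambda x: min(data, key=lambda p: abs(p[0] - x))[1]

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

def flip_labels(data, fraction):
    """Label-flipping poisoning: invert the label of a random fraction of samples."""
    poisoned = list(data)
    for i in random.sample(range(len(poisoned)), int(fraction * len(poisoned))):
        x, y = poisoned[i]
        poisoned[i] = (x, 1 - y)
    return poisoned

clean_acc = accuracy(nn_classifier(train), test)
poisoned_acc = accuracy(nn_classifier(flip_labels(train, 0.4)), test)
print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

With 40% of the training labels flipped, roughly that same fraction of nearest neighbours carries a wrong label, so test accuracy degrades sharply even though the feature values themselves are untouched.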

attacks, data exfiltration, machine learning, mobile
