May 11, 2023, 1:10 a.m. | Ali Raza, Shujun Li, Kim-Phuc Tran, Ludovic Koehl

cs.CR updates on arXiv.org

Adversarial attacks such as poisoning attacks have attracted the attention of
many machine learning researchers. Traditionally, poisoning attacks attempt to
inject adversarial training data in order to manipulate the trained model. In
federated learning (FL), data poisoning attacks can be generalized to model
poisoning attacks, which simpler methods cannot detect because the detector
has no access to the clients' local training data. State-of-the-art poisoning
attack detection methods for FL have various weaknesses, e.g., the number of
attackers …
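
To make the data-poisoning vs. model-poisoning distinction concrete, below is a minimal sketch of one federated averaging round with a single malicious client. The FedAvg aggregation, the toy linear model, and the names `poisoned_update`, `target_w`, and `boost` are illustrative assumptions, not the paper's method; the point is that the attacker ships a crafted update directly, and the server, which never sees local data, has nothing at the data level to inspect.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, data, lr=0.1):
    """Honest client: one gradient-descent step on a toy linear model."""
    X, y = data
    preds = X @ global_weights
    grad = X.T @ (preds - y) / len(y)
    return global_weights - lr * grad

def poisoned_update(global_weights, target, boost=10.0):
    """Malicious client: model poisoning. Instead of tampering with its
    training data, the attacker crafts the update itself, scaling it so
    the aggregate is dragged toward attacker-chosen weights `target`."""
    return global_weights + boost * (target - global_weights)

def fed_avg(updates):
    """Server: plain federated averaging. It only ever sees updates,
    never the local data, so data-level poisoning checks are impossible."""
    return np.mean(updates, axis=0)

# --- toy round: 4 honest clients, 1 attacker ---
dim = 5
global_w = np.zeros(dim)
target_w = np.full(dim, 3.0)  # attacker's desired model (illustrative)

clients = [(rng.normal(size=(20, dim)), rng.normal(size=20))
           for _ in range(4)]

updates = [local_update(global_w, d) for d in clients]
updates.append(poisoned_update(global_w, target_w))  # one malicious update

global_w = fed_avg(updates)
print("post-round weights:", np.round(global_w, 2))  # pulled toward target_w
```

Scaling the malicious update (the `boost` factor here) lets even a single attacker dominate an unweighted average, which is one reason FL detection schemes inspect the submitted updates themselves, e.g., via anomaly detection over update statistics, rather than the (inaccessible) training data.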

