March 22, 2023, 1:10 a.m. | Ka-Ho Chow, Ling Liu, Wenqi Wei, Fatih Ilhan, Yanzhao Wu

cs.CR updates on arXiv.org

Federated Learning (FL) has been gaining popularity as a collaborative
learning framework for training deep learning-based object detection models
over a distributed population of clients. Despite its advantages, FL is
vulnerable to model hijacking: an attacker can control how the object
detection system misbehaves by implanting Trojaned gradients through only a
small number of compromised clients in the collaborative learning process.
This paper introduces STDLens, a principled approach to safeguarding FL
against such attacks. We first investigate existing mitigation …

