Sept. 20, 2022, 1:20 a.m. | Taejin Kim, Shubhranshu Singh, Nikhil Madaan, Carlee Joe-Wong

cs.CR updates on arXiv.org

Personalized federated learning allows clients in a distributed system to train a neural network tailored to their unique local data while leveraging information from other clients. However, clients' models are vulnerable to attacks during both the training and testing phases. In this paper, we address the issue of adversarial clients crafting evasion attacks at test time to deceive other clients. For example, adversaries may aim to deceive spam filters and recommendation systems trained with personalized federated learning for monetary …
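The abstract stops short, but the setup it describes is concrete enough to sketch. Below is a minimal, hypothetical illustration (not the paper's method): personalized federated learning modeled as a shared feature extractor averaged across clients with per-client heads, and an FGSM-style evasion attack crafted against a client's personalized model at test time. All class names, the aggregation scheme, and hyperparameters are assumptions made for illustration.

```python
# Sketch only: personalized FL with a shared base + personal heads,
# attacked at test time with FGSM. Assumed setup, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClientModel(nn.Module):
    def __init__(self, in_dim=20, hidden=32, n_classes=2):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())  # federated part
        self.head = nn.Linear(hidden, n_classes)                           # personal part

    def forward(self, x):
        return self.head(self.shared(x))

def fedavg_shared(models):
    """Average only the shared layers across clients (FedAvg on the base)."""
    keys = models[0].shared.state_dict().keys()
    avg = {k: torch.stack([m.shared.state_dict()[k] for m in models]).mean(0)
           for k in keys}
    for m in models:
        m.shared.load_state_dict(avg)

def fgsm_evasion(model, x, y, eps=0.1):
    """Craft an evasion example against a victim model at test time."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

# Toy round: two clients train locally on synthetic data, then the
# shared base is averaged; finally one client's model is attacked.
clients = [ClientModel(), ClientModel()]
data = [(torch.randn(64, 20), torch.randint(0, 2, (64,))) for _ in clients]
for m, (x, y) in zip(clients, data):
    opt = torch.optim.SGD(m.parameters(), lr=0.1)
    for _ in range(20):
        opt.zero_grad()
        F.cross_entropy(m(x), y).backward()
        opt.step()
fedavg_shared(clients)

victim = clients[0]
x_test, y_test = data[0][0][:8], data[0][1][:8]
x_adv = fgsm_evasion(victim, x_test, y_test)
print("clean acc:", (victim(x_test).argmax(1) == y_test).float().mean().item())
print("adv   acc:", (victim(x_adv).argmax(1) == y_test).float().mean().item())
```

Keeping only the classification head local is one common way to personalize in federated learning; the paper's actual personalization and threat model may differ from this assumption.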

Tags: attacks, box, federated learning
