Sept. 13, 2022, 1:20 a.m. | Amrita Roy Chowdhury, Chuan Guo, Somesh Jha, Laurens van der Maaten

cs.CR updates on arXiv.org (arxiv.org)

Federated learning (FL) enables clients to collaborate with a server to train
a machine learning model. To ensure privacy, the server performs secure
aggregation of updates from the clients. Unfortunately, this prevents
verification of the well-formedness (integrity) of the updates as the updates
are masked. Consequently, malformed updates designed to poison the model can be
injected without detection. In this paper, we formalize the problem of ensuring
both update privacy and integrity in FL and present a new system, EIFFeL, …
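The tension the abstract describes is easy to see in a toy version of secure aggregation. The sketch below is an illustrative assumption, not the EIFFeL construction from the paper: it uses standard pairwise additive masking, where each pair of clients shares a random mask that one adds and the other subtracts. The server sees only masked vectors, so a poisoned update looks no different from a benign one, yet the masks cancel when everything is summed.

```python
# Minimal sketch (not the EIFFeL protocol): pairwise-mask secure aggregation.
# Each client pair (i, j) agrees on a random mask; client i adds it and client j
# subtracts it, so the masks cancel in the aggregate but hide individual updates.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 4, 3

# True (private) model updates, one per client. A malicious client could place
# arbitrary values here -- the server cannot tell from the masked vectors alone.
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Pairwise masks with mask[i][j] = -mask[j][i], agreed out-of-band between i and j.
masks = [[np.zeros(dim) for _ in range(n_clients)] for _ in range(n_clients)]
for i in range(n_clients):
    for j in range(i + 1, n_clients):
        m = rng.normal(size=dim)
        masks[i][j] = m
        masks[j][i] = -m

# What the server receives: each client's update plus the sum of its masks.
masked = [updates[i] + sum(masks[i][j] for j in range(n_clients))
          for i in range(n_clients)]

# The server can recover only the aggregate: all pairwise masks cancel in the sum,
# but no individual update is ever visible for an integrity (well-formedness) check.
aggregate = sum(masked)
assert np.allclose(aggregate, sum(updates))
```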

Tags: federated learning, integrity
