Jan. 4, 2022, 2:20 a.m. | Phillip Rieger, Thien Duc Nguyen, Markus Miettinen, Ahmad-Reza Sadeghi

cs.CR updates on arXiv.org

Federated Learning (FL) allows multiple clients to collaboratively train a
Neural Network (NN) model on their private data without revealing the data.
Recently, several targeted poisoning attacks against FL have been introduced.
These attacks inject a backdoor into the resulting model that allows
adversary-controlled inputs to be misclassified. Existing countermeasures
against backdoor attacks are inefficient and often merely aim to exclude
deviating models from the aggregation. However, this approach also removes
benign models of clients with deviating data distributions, causing …
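To make the exclusion-based defense concrete, here is a minimal, hypothetical sketch (not the paper's method): client updates whose cosine distance to the coordinate-wise median update exceeds a threshold are dropped before FedAvg-style averaging. The threshold value and the use of the median as a reference are illustrative assumptions.

```python
import numpy as np

def filter_and_average(updates, threshold=0.5):
    """Naive exclusion defense sketch (illustrative, not the paper's method):
    drop client updates whose cosine distance to the coordinate-wise median
    update exceeds `threshold`, then average the remaining updates."""
    updates = np.asarray(updates, dtype=float)
    reference = np.median(updates, axis=0)  # robust reference direction

    def cos_dist(u, v):
        # 1 - cosine similarity; small epsilon guards against zero norms
        return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

    kept = [u for u in updates if cos_dist(u, reference) <= threshold]
    return np.mean(kept, axis=0)

# Three benign updates point in roughly the same direction; one poisoned
# update points the opposite way and is filtered out before averaging.
updates = [[1.0, 1.0], [0.9, 1.1], [1.1, 0.9], [-1.0, -1.0]]
aggregated = filter_and_average(updates)
```

Note that this is exactly the failure mode the abstract criticizes: a benign client whose data distribution genuinely deviates would be filtered out by the same rule as the attacker.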

Tags: attacks, backdoor
