May 4, 2022, 1:20 a.m. | Dimitris Stripelis, Marcin Abram, Jose Luis Ambite

cs.CR updates on arXiv.org

Federated Learning has emerged as a dominant computational paradigm for distributed machine learning. Its unique data privacy properties allow us to collaboratively train models while offering participating clients certain privacy-preserving guarantees. However, in real-world applications, a federated environment may consist of a mixture of benevolent and malicious clients, with the latter aiming to corrupt and degrade the federated model's performance. Different corruption schemes may be applied, such as model poisoning and data corruption. Here, we focus on the latter, the susceptibility …
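The setting described above can be illustrated with a minimal, self-contained sketch; this is not the paper's method, just a plain federated-averaging loop over synthetic clients in which some clients hold label-flipped (corrupted) data. All names and parameters here (make_client_data, local_update, the number of clients and rounds) are illustrative assumptions rather than anything defined in the abstract.

```python
# Minimal sketch, assuming a FedAvg-style federation with synthetic clients.
# Data corruption is modeled as label flipping at the corrupted clients.
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(n=200, corrupted=False):
    # Synthetic 2-D binary classification data; corrupted clients get flipped labels.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    if corrupted:
        y = 1.0 - y  # data corruption: every label is flipped
    return X, y

def local_update(w, X, y, lr=0.1, epochs=5):
    # A few steps of logistic-regression gradient descent on the client's local data.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * (X.T @ (p - y)) / len(y)
    return w

def fed_avg_round(global_w, clients):
    # One federated round: every client trains locally, the server averages the weights.
    updates = [local_update(global_w.copy(), X, y) for X, y in clients]
    return np.mean(updates, axis=0)

benign = [make_client_data() for _ in range(3)]
corrupted = [make_client_data(corrupted=True) for _ in range(3)]
X_test, y_test = make_client_data(n=2000)

for name, federation in [("all benign", benign), ("half corrupted", benign + corrupted)]:
    w = np.zeros(2)
    for _ in range(25):
        w = fed_avg_round(w, federation)
    acc = np.mean(((X_test @ w) > 0) == (y_test > 0.5))
    print(f"{name}: clean test accuracy {acc:.3f}")
```

Running the script trains the same model once on an all-benign federation and once with half the clients holding corrupted data, so the effect of corrupted sources on the aggregated model can be compared directly on clean held-out data.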

