July 27, 2023, 1:10 a.m. | Jingwei Yi, Fangzhao Wu, Huishuai Zhang, Bin Zhu, Tao Qi, Guangzhong Sun, Xing Xie

cs.CR updates on arXiv.org

Federated learning (FL) enables multiple clients to collaboratively train
models without sharing their local data, and has become an important
privacy-preserving machine learning framework. However, classical FL faces
serious security and robustness problems: malicious clients can poison
model updates and, at the same time, claim large data quantities to amplify
the impact of their model updates in the model aggregation. Existing defense
methods for FL, while all handling malicious model updates, either treat all
quantities as benign or simply ignore/truncate the quantities …
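To see why claimed quantities matter, here is a minimal sketch of FedAvg-style quantity-weighted aggregation, the setting the abstract describes. All names (`quantity_weighted_fedavg`, the toy updates and quantities) are hypothetical illustrations, not the paper's implementation; the point is only that an inflated quantity claim lets one poisoned update dominate the aggregate.

```python
import numpy as np

def quantity_weighted_fedavg(updates, claimed_quantities):
    """Aggregate client model updates, weighting each update by the
    data quantity its client claims (standard FedAvg weighting)."""
    weights = np.asarray(claimed_quantities, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Two honest clients and one malicious client (toy 2-parameter model).
honest = [np.array([1.0, 1.0]), np.array([1.2, 0.8])]
poisoned = np.array([-10.0, -10.0])  # poisoned model update

# Honest quantity claim: the poisoned update is diluted by the others.
print(quantity_weighted_fedavg(honest + [poisoned], [100, 100, 100]))

# Inflated quantity claim: the same poisoned update dominates aggregation.
print(quantity_weighted_fedavg(honest + [poisoned], [100, 100, 10000]))
```

Running the two calls shows the aggregate shifting from near the honest average toward the poisoned update as the claimed quantity grows, which is the amplification attack the defenses discussed here aim to resist.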

