July 27, 2023, 1:10 a.m. | Jingwei Yi, Fangzhao Wu, Huishuai Zhang, Bin Zhu, Tao Qi, Guangzhong Sun, Xing Xie

cs.CR updates on arXiv.org

Federated learning (FL) enables multiple clients to collaboratively train
models without sharing their local data, and has become an important
privacy-preserving machine learning framework. However, classical FL faces
serious security and robustness problems; for example, malicious clients can
poison model updates while claiming large data quantities to amplify the
impact of their updates during model aggregation. Existing defense methods
for FL, while all handling malicious model updates, either treat all
quantities as benign or simply ignore/truncate the quantities …
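To see why claimed quantities matter, consider a minimal sketch (not the paper's method) of quantity-weighted averaging in the FedAvg style, where the server weights each client's update by its self-reported data quantity. All names and numbers below are hypothetical, chosen only to illustrate the amplification attack the abstract describes.

```python
# Sketch of quantity-weighted aggregation, assuming self-reported quantities.
import numpy as np

def aggregate(updates, claimed_quantities):
    """Weighted-average client updates by their self-reported data quantities,
    as in vanilla FedAvg-style aggregation (hypothetical helper)."""
    weights = np.asarray(claimed_quantities, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.01, size=4) for _ in range(9)]  # benign updates
poison = np.full(4, 5.0)                                    # poisoned update

# Honest clients each hold ~100 samples; the attacker truthfully claims 100
# in one run, then falsely claims 10,000 in the other.
honest_claim   = aggregate(honest + [poison], [100] * 9 + [100])
inflated_claim = aggregate(honest + [poison], [100] * 9 + [10_000])

print("aggregate, truthful claim:", honest_claim)
print("aggregate, inflated claim:", inflated_claim)  # dominated by the poison
```

With a truthful claim the poisoned update contributes one tenth of the aggregate; with the inflated claim it contributes over 90 percent, which is why defenses that ignore or blindly truncate quantities leave this attack surface open.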
