April 11, 2023, 1:10 a.m. | Kang Wei, Jun Li, Chuan Ma, Ming Ding, Haitao Zhao, Wen Chen, Hongbo Zhu

cs.CR updates on arXiv.org

Federated learning (FL) enables distributed clients to collaboratively train a machine learning model without sharing raw data with each other. However, it suffers from the leakage of private information through the uploaded models. In addition, as the model size grows, training latency increases due to limited transmission bandwidth, and model performance degrades under differential privacy (DP) protection. In this paper, we propose a gradient sparsification empowered FL framework over wireless channels, in order to improve training efficiency without sacrificing …
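The excerpt cuts off before the paper's design details, but the two building blocks it names, gradient sparsification and DP protection, compose roughly as follows. Below is a minimal sketch assuming top-k sparsification and the Gaussian mechanism; the function name sparsify_and_privatize and the parameters k, clip_norm, and sigma are illustrative placeholders rather than the paper's notation, and the sketch ignores the wireless-channel modeling entirely.

```python
import numpy as np

def sparsify_and_privatize(grad, k, clip_norm, sigma, rng):
    """Sketch of a sparsified, DP-protected client upload.

    Hypothetical composition of top-k sparsification with the Gaussian
    mechanism; not the specific scheme proposed in the paper.
    """
    # Top-k sparsification: keep only the k largest-magnitude
    # coordinates, so the client uploads ~k values instead of the
    # full model update, reducing transmission latency.
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    values = grad[idx]

    # Gaussian-mechanism DP on the transmitted values: clip to a fixed
    # L2 norm, then add noise calibrated to that sensitivity.
    # clip_norm and sigma are assumed hyperparameters.
    norm = np.linalg.norm(values)
    values = values * min(1.0, clip_norm / (norm + 1e-12))
    values = values + rng.normal(0.0, sigma * clip_norm, size=values.shape)

    sparse = np.zeros_like(grad)
    sparse[idx] = values
    return sparse

rng = np.random.default_rng(0)
grad = rng.normal(size=10_000)                 # a client's local gradient
upload = sparsify_and_privatize(grad, k=1_000, # transmit ~10% of coordinates
                                clip_norm=1.0, sigma=0.5, rng=rng)
print(np.count_nonzero(upload))                # ~1000 nonzero entries
```

Noise is added only on the k surviving coordinates so the upload stays sparse and the bandwidth savings are preserved; whether the paper injects noise before or after sparsification affects both the privacy accounting and the convergence analysis, and is not recoverable from the truncated abstract.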
