Dec. 9, 2022, 2:10 a.m. | Ergute Bao, Yizheng Zhu, Xiaokui Xiao, Yin Yang, Beng Chin Ooi, Benjamin Hong Meng Tan, Khin Mi Mi Aung

cs.CR updates on arXiv.org

Deep neural networks can memorize their underlying training data, which poses a
serious privacy concern. An effective solution to this problem is to train
models with differential privacy, which provides rigorous privacy guarantees by
injecting random noise into the gradients. This paper focuses on the scenario
where sensitive data are distributed among multiple participants, who jointly
train a model through federated learning (FL), using both secure multiparty
computation (MPC) to ensure the confidentiality of each gradient …
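
The noise injection the abstract refers to is typically realized as DP-SGD
(Abadi et al., 2016): clip each per-example gradient to a fixed L2 norm, sum
the clipped gradients, and add Gaussian noise calibrated to the clipping
bound. Below is a minimal NumPy sketch of one such aggregation step under
that assumption; the parameter names `clip_norm` and `noise_multiplier` are
illustrative, not taken from the paper, whose exact mechanism is not shown
in the truncated abstract.

```python
import numpy as np

def dp_noisy_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                      rng=np.random.default_rng(0)):
    """One DP-SGD-style aggregation step (illustrative sketch).

    Each per-example gradient is rescaled so its L2 norm is at most
    `clip_norm`, bounding any single example's influence. Gaussian noise
    with scale `noise_multiplier * clip_norm` is then added to the sum,
    which is what yields a differential-privacy guarantee.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so the gradient's norm is <= clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Hypothetical usage with two toy per-example gradients:
grads = [np.array([0.5, -2.0]), np.array([3.0, 1.0])]
print(dp_noisy_gradient(grads))
```

In the federated setting the paper studies, each participant would compute
such clipped gradients locally, with MPC keeping the individual contributions
confidential during aggregation.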

differential privacy, federated learning, novel privacy
