Skellam Mixture Mechanism: a Novel Approach to Federated Learning with Differential Privacy
July 4, 2024, 11:02 a.m. | Ergute Bao, Yizheng Zhu, Xiaokui Xiao, Yin Yang, Beng Chin Ooi, Benjamin Hong Meng Tan, Khin Mi Mi Aung
cs.CR updates on arXiv.org arxiv.org
Abstract: Deep neural networks have a strong capacity to memorize their underlying training data, which can be a serious privacy concern. An effective solution to this problem is to train models with differential privacy, which provides rigorous privacy guarantees by injecting random noise into the gradients. This paper focuses on the scenario where sensitive data are distributed among multiple participants, who jointly train a model through federated learning (FL), using both secure multiparty computation (MPC) …
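The abstract describes training with differential privacy by clipping per-participant gradients and adding random noise; in the Skellam mechanism, that noise is the difference of two i.i.d. Poisson draws, which is integer-valued and therefore compatible with the integer arithmetic that MPC protocols require. Below is a minimal illustrative sketch of this idea, not the paper's actual algorithm: the function names, clipping style, and parameter choices (`clip_norm`, `scale`, `mu`) are assumptions for demonstration only.

```python
import numpy as np

def skellam_noise(mu, size, rng):
    # Skellam(mu, mu) noise: difference of two i.i.d. Poisson(mu) draws.
    # Symmetric around 0, integer-valued, with variance 2*mu.
    return rng.poisson(mu, size) - rng.poisson(mu, size)

def privatize_gradient(grad, clip_norm, scale, mu, rng):
    # Hypothetical helper: clip the gradient to bound its L2 sensitivity,
    # quantize to integers (as MPC-style fixed-point arithmetic needs),
    # then add Skellam noise before sharing.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    quantized = np.round(clipped * scale).astype(np.int64)
    return quantized + skellam_noise(mu, grad.shape, rng)

rng = np.random.default_rng(0)
grad = rng.normal(size=8)                      # stand-in local gradient
noisy = privatize_gradient(grad, clip_norm=1.0, scale=2**10, mu=5000.0, rng=rng)
```

In a federated setting each participant would run a step like this locally and submit only the noisy integer vector; the server (or MPC protocol) sums the submissions and rescales by `scale`. The actual privacy accounting for the Skellam mechanism is the paper's contribution and is not reproduced here.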