Aug. 4, 2022, 1:20 a.m. | M.A.P. Chamikara, Dongxi Liu, Seyit Camtepe, Surya Nepal, Marthie Grobler, Peter Bertok, Ibrahim Khalil

cs.CR updates on arXiv.org

Advanced adversarial attacks such as membership inference and model
memorization can make federated learning (FL) vulnerable and potentially leak
sensitive private data. Local differential privacy (LDP) approaches are
gaining popularity due to their stronger privacy notions and native support for
distributed data compared to other differentially private (DP) solutions.
However, DP approaches assume that the FL server (which aggregates the models)
is honest (runs the FL protocol honestly) or semi-honest (runs the FL protocol
honestly while also trying to learn …

differential privacy federated learning local privacy
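
The sketch below illustrates the general idea behind client-side (local) DP in federated learning that the abstract alludes to: each client clips and noises its own model update before it ever reaches the aggregation server, so the server does not need to be trusted. This is a minimal, assumption-laden illustration using a simple Gaussian mechanism and NumPy; the function names, clipping norm, and noise scale are placeholders, not the mechanism proposed in the paper.

```python
# Illustrative sketch of local DP for federated updates (not the paper's method).
import numpy as np

def clip_update(update: np.ndarray, clip_norm: float) -> np.ndarray:
    """Bound the L2 sensitivity of a client's update by clipping its norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def ldp_perturb(update: np.ndarray, clip_norm: float, noise_std: float,
                rng: np.random.Generator) -> np.ndarray:
    """Clip locally, then add Gaussian noise on the client side before upload."""
    clipped = clip_update(update, clip_norm)
    return clipped + rng.normal(0.0, noise_std, size=clipped.shape)

def aggregate(noisy_updates: list[np.ndarray]) -> np.ndarray:
    """The server only ever sees already-noised updates, so it need not be trusted."""
    return np.mean(noisy_updates, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw_updates = [rng.normal(size=10) for _ in range(5)]  # stand-in client gradients
    noisy = [ldp_perturb(u, clip_norm=1.0, noise_std=0.5, rng=rng) for u in raw_updates]
    print(aggregate(noisy))
```

Because the noise is added on each client, privacy holds even against an untrusted or semi-honest server, at the cost of higher noise (and thus utility loss) than central DP for the same privacy budget.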
