Nov. 5, 2023, 6:10 a.m. | Christopher A. Choquette-Choo, Arun Ganesh, Ryan McKenna, H. Brendan McMahan, Keith Rush, Abhradeep Thakurta, Zheng Xu

cs.CR updates on arXiv.org

Matrix factorization (MF) mechanisms for differential privacy (DP) have
substantially improved the state-of-the-art in privacy-utility-computation
tradeoffs for ML applications in a variety of scenarios, but in both the
centralized and federated settings there remain instances where either MF
cannot be easily applied, or other algorithms provide better tradeoffs
(typically, as $\epsilon$ becomes small). In this work, we show how MF can
subsume prior state-of-the-art algorithms in both federated and centralized
training settings, across all privacy budgets. The key technique throughout …
