Oct. 5, 2022, 1:20 a.m. | Franziska Boenisch, Christopher Mühl, Roy Rinberg, Jannis Ihrig, Adam Dziedzic

cs.CR updates on arXiv.org

Applying machine learning (ML) to sensitive domains requires protecting the
privacy of the underlying training data through formal privacy frameworks such
as differential privacy (DP). Yet the privacy of the training data usually
comes at the cost of the resulting ML models' utility. One reason for this is
that DP uses a single, uniform privacy budget epsilon for all training data
points, which must align with the strictest privacy requirement encountered
among all data holders. In practice, different data holders have …
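To make the uniform-budget limitation concrete, here is a minimal, hypothetical sketch of the classic Laplace mechanism for a private mean: a single `epsilon` parameter governs the noise added for the entire dataset, so it must be set to the strictest (smallest) budget any data holder demands, even if most holders would tolerate more. This is an illustrative example of standard DP, not the individualized approach the paper itself develops.

```python
import numpy as np

def dp_mean(data, epsilon, lo=0.0, hi=1.0):
    """Differentially private mean via the Laplace mechanism.

    One uniform privacy budget `epsilon` covers every record, so in a
    multi-holder setting it must match the strictest requirement among
    all data holders. Records are clipped to [lo, hi] to bound the
    sensitivity of the mean.
    """
    data = np.clip(np.asarray(data, dtype=float), lo, hi)
    true_mean = data.mean()
    # Sensitivity of the mean of n records bounded in [lo, hi].
    sensitivity = (hi - lo) / len(data)
    # Laplace noise calibrated to sensitivity / epsilon gives
    # epsilon-DP; a smaller epsilon means more noise for everyone.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise
```

Because the budget is shared, lowering `epsilon` to satisfy one strict data holder degrades the estimate for all records alike, which is exactly the utility cost the abstract points to.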

machine learning, privacy
