March 6, 2023, 2:10 a.m. | Ayoub Arous, Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique

cs.CR updates on arXiv.org

Machine Learning (ML) architectures have been applied to several applications
that involve sensitive data, where a guarantee of users' data privacy is
required. Differentially Private Stochastic Gradient Descent (DPSGD) is the
state-of-the-art method for training privacy-preserving models. However, DPSGD
incurs a considerable accuracy loss, leading to sub-optimal privacy/utility
trade-offs. To investigate new ground for a better privacy-utility
trade-off, this work asks: (i) whether models' hyperparameters have any
inherent impact on ML models' privacy-preserving properties, and (ii) whether
models' hyperparameters have …
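For context, DPSGD differs from plain SGD in two steps: each per-example gradient is clipped to a fixed L2 norm, and calibrated Gaussian noise is added before the averaged update is applied. The sketch below is a minimal, hedged illustration of that mechanism (not the paper's implementation); the function name `dpsgd_step` and the parameter defaults are assumptions for illustration only.

```python
import numpy as np

def dpsgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
               noise_multiplier=1.1, rng=None):
    """One illustrative DPSGD update (sketch, not a vetted DP implementation):
    clip each per-example gradient to L2 norm `clip_norm`, sum them, add
    Gaussian noise with std `noise_multiplier * clip_norm`, then average."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so the per-example norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    batch = len(clipped)
    noisy_sum = (np.sum(clipped, axis=0)
                 + rng.normal(0.0, noise_multiplier * clip_norm,
                              size=params.shape))
    return params - lr * noisy_sum / batch
```

The clipping bounds each example's influence on the update (the sensitivity), which is what lets the Gaussian noise be calibrated to a target (ε, δ) privacy budget; accounting for that budget over many steps is the part this sketch omits.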

