Dec. 9, 2022, 2:10 a.m. | Ashwinee Panda, Xinyu Tang, Vikash Sehwag, Saeed Mahloujifar, Prateek Mittal

cs.CR updates on arXiv.org

A major direction in differentially private machine learning is
differentially private fine-tuning: pretraining a model on a source of "public
data" and transferring the extracted features to downstream tasks.


This is an important setting because many industry deployments fine-tune
publicly available feature extractors on proprietary data for downstream tasks.


In this paper, we use features extracted from state-of-the-art open source
models to solve benchmark tasks in computer vision and natural language
processing using differentially private fine-tuning. Our key insight is …
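For the feature-extraction side of this pipeline, a minimal sketch follows, assuming a torchvision ResNet-50 as a stand-in for the "state-of-the-art open source models"; the paper's actual backbones and benchmarks may differ:

```python
import torch
import torchvision.models as models
from torchvision.models import ResNet50_Weights

# Load a publicly pretrained backbone and strip its classification head.
# ResNet-50 is an illustrative choice, not necessarily the paper's model.
weights = ResNet50_Weights.DEFAULT
backbone = models.resnet50(weights=weights)
backbone.fc = torch.nn.Identity()  # expose 2048-dim penultimate features
backbone.eval()

preprocess = weights.transforms()  # normalization these weights expect

@torch.no_grad()
def extract_features(images):
    """Map raw images to fixed feature vectors; only the small head
    trained on these features is updated with DP-SGD."""
    return backbone(preprocess(images))

# Toy usage with random tensors standing in for a private dataset.
private_batch = torch.rand(8, 3, 224, 224)
features = extract_features(private_batch)  # shape: (8, 2048)
```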
