Oct. 4, 2022, 1:20 a.m. | Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis

cs.CR updates on arXiv.org arxiv.org

We study the problem of differentially private (DP) fine-tuning of large
pre-trained models -- a recent privacy-preserving approach suitable for solving
downstream tasks with sensitive data. Existing work has demonstrated that high
accuracy is possible under strong privacy constraints, yet achieving it requires
significant computational overhead or modifications to the network architecture.


We propose differentially private bias-term fine-tuning (DP-BiTFiT), which
matches the state-of-the-art accuracy for DP algorithms and the efficiency of
the standard BiTFiT. DP-BiTFiT is model agnostic (not modifying the network …

