Differentially Private Bias-Term only Fine-tuning of Foundation Models. (arXiv:2210.00036v2 [cs.LG] UPDATED)
Oct. 5, 2022, 1:20 a.m. | Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis
cs.CR updates on arXiv.org arxiv.org
We study the problem of differentially private (DP) fine-tuning of large
pre-trained models -- a recent privacy-preserving approach suitable for solving
downstream tasks with sensitive data. Existing work has demonstrated that high
accuracy is achievable under strong privacy constraints, yet doing so requires
significant computational overhead or modifications to the network architecture.
We propose differentially private bias-term fine-tuning (DP-BiTFiT), which
matches the state-of-the-art accuracy for DP algorithms and the efficiency of
the standard BiTFiT. DP-BiTFiT is model agnostic (not modifying the network …
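The core idea, training only the bias terms with a DP-SGD-style step (per-example gradient clipping plus Gaussian noise), can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the toy linear model and names like `clip_norm` and `noise_mult` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear layer: the weights W are frozen (pre-trained), only the
# bias b is trainable -- the "bias-term only" part of BiTFiT.
W = rng.standard_normal((3, 5))   # frozen pre-trained weights
b = np.zeros(3)                   # trainable bias term

X = rng.standard_normal((8, 5))   # batch of 8 private examples
y = rng.standard_normal((8, 3))   # regression targets

clip_norm = 1.0                   # per-example clipping threshold C (assumed)
noise_mult = 1.0                  # noise multiplier sigma (assumed)
lr = 0.1

# Forward pass; per-example bias gradients under MSE loss are simply
# 2 * (pred_i - y_i), so no per-weight gradients are ever materialized.
pred = X @ W.T + b                # shape (8, 3)
per_ex_grads = 2.0 * (pred - y)   # one bias-gradient row per example

# DP-SGD step restricted to the bias: clip each per-example gradient
# to L2 norm <= clip_norm ...
norms = np.linalg.norm(per_ex_grads, axis=1, keepdims=True)
clipped = per_ex_grads * np.minimum(1.0, clip_norm / norms)

# ... then sum, add calibrated Gaussian noise, and average.
noisy_sum = clipped.sum(axis=0) + rng.normal(
    0.0, noise_mult * clip_norm, size=b.shape)
b -= lr * noisy_sum / len(X)
```

Because only the bias vector receives noisy gradients, the per-example clipping touches a tiny parameter subset, which is where the efficiency over full DP fine-tuning comes from in this sketch.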
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
IT Security Manager
@ Teltonika | Vilnius/Kaunas, VL, LT
Security Officer - Part Time - Harrah's Gulf Coast
@ Caesars Entertainment | Biloxi, MS, United States
DevSecOps Full-stack Developer
@ Peraton | Fort Gordon, GA, United States
Cybersecurity Cooperation Lead
@ Peraton | Stuttgart, AE, United States
Cybersecurity Engineer - Malware & Forensics
@ ManTech | 201DU - Customer Site, Herndon, VA