On the Convergence of Differentially-Private Fine-tuning: To Linearly Probe or to Fully Fine-tune?
March 1, 2024, 5:11 a.m. | Shuqi Ke, Charlie Hou, Giulia Fanti, Sewoong Oh
cs.CR updates on arXiv.org arxiv.org
Abstract: Differentially private (DP) machine learning pipelines typically involve a two-phase process: non-private pre-training on a public dataset, followed by fine-tuning on private data using DP optimization techniques. In the DP setting, it has been observed that full fine-tuning may not always yield the best test accuracy, even for in-distribution data. This paper (1) analyzes the training dynamics of DP linear probing (LP) and full fine-tuning (FT), and (2) explores the phenomenon of sequential fine-tuning, starting …
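To make the two fine-tuning strategies concrete: linear probing (LP) trains only a final linear head on frozen pre-trained features, while full fine-tuning (FT) updates all parameters, and in the DP setting both are optimized with DP-SGD (per-example gradient clipping plus Gaussian noise). Below is a minimal NumPy sketch of one DP-SGD step on a linear probe. It is not code from the paper; the function `dp_sgd_step`, the squared-loss choice, and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, sigma=0.5, rng=None):
    """One DP-SGD step on a linear probe with squared loss.

    Per-example gradients are clipped to L2 norm `clip`, summed,
    and Gaussian noise with std sigma * clip is added before averaging.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = X.shape[0]
    # Per-example gradient of 0.5 * (x . w - y)^2 is (x . w - y) * x.
    residuals = X @ w - y                 # shape (n,)
    grads = residuals[:, None] * X        # shape (n, d)
    # Clip each per-example gradient to L2 norm at most `clip`.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip)
    # Sum, add Gaussian noise scaled to the clipping norm, then average.
    noisy_sum = grads.sum(axis=0) + rng.normal(0.0, sigma * clip, size=w.shape)
    return w - lr * noisy_sum / n

# Linear probing: X plays the role of frozen pre-trained features phi(x);
# only the linear head w is trained. Full FT would also update phi.
rng = np.random.default_rng(1)
X = rng.normal(size=(256, 8))
w_true = rng.normal(size=8)
y = X @ w_true
w = np.zeros(8)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)
```

Because clipping bounds each example's contribution and the noise scale is tied to that bound, the released gradient satisfies a Gaussian-mechanism DP guarantee per step; the paper's analysis concerns how these clipped, noisy dynamics converge differently for LP versus FT.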
Tags: arxiv, cs.ai, cs.cr, cs.lg, math.oc, convergence, data, dataset, fine-tuning, machine learning, optimization, pipelines, private data, training
Jobs in InfoSec / Cybersecurity
XDR Detection Engineer
@ SentinelOne | Italy
Security Engineer L2
@ NTT DATA | A Coruña, Spain
Cyber Security Assurance Manager
@ Babcock | Portsmouth, GB, PO6 3EN
Senior Threat Intelligence Researcher
@ CloudSEK | Bengaluru, Karnataka, India
Cybersecurity Analyst 1
@ Spry Methods | Washington, DC (Hybrid)
Security Infrastructure DevOps Engineering Manager
@ Apple | Austin, Texas, United States