Pre-training Differentially Private Models with Limited Public Data
March 1, 2024, 5:11 a.m. | Zhiqi Bu, Xinwei Zhang, Mingyi Hong, Sheng Zha, George Karypis
cs.CR updates on arXiv.org arxiv.org
Abstract: The superior performance of large foundation models relies on massive amounts of high-quality data, which often contain sensitive, private, and copyrighted material that requires formal protection. While differential privacy (DP) is a prominent method for quantifying the degree of protection provided to a model, its application is commonly limited to the fine-tuning stage, due to the performance degradation incurred when applying DP during pre-training. Consequently, DP is not yet capable …
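The standard mechanism for training with DP guarantees, which the abstract's pre-training vs. fine-tuning discussion presupposes, is DP-SGD: each per-sample gradient is clipped to a fixed norm before aggregation, and calibrated Gaussian noise is added to the sum. Below is a minimal illustrative sketch of one such update step in plain Python; the function name and parameters are hypothetical, and a real implementation would operate on framework tensors and track the privacy budget with an accountant.

```python
import random

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD update (sketch): clip each per-sample gradient to
    clip_norm, sum the clipped gradients, add Gaussian noise with
    standard deviation noise_multiplier * clip_norm, and average."""
    rng = rng or random.Random(0)
    dim = len(per_sample_grads[0])
    summed = [0.0] * dim
    for g in per_sample_grads:
        norm = sum(x * x for x in g) ** 0.5
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        scale = min(1.0, clip_norm / max(norm, 1e-12))
        for i, x in enumerate(g):
            summed[i] += x * scale
    sigma = noise_multiplier * clip_norm
    batch_size = len(per_sample_grads)
    return [(s + rng.gauss(0.0, sigma)) / batch_size for s in summed]
```

The clipping bounds each example's influence on the update (its sensitivity), which is what lets the added noise translate into a formal (epsilon, delta)-DP guarantee; the performance degradation the abstract mentions comes from exactly this clipping bias and injected noise, which hurt more at pre-training scale.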