Differentially Private Knowledge Distillation via Synthetic Text Generation
March 5, 2024, 3:11 p.m. | James Flemings, Murali Annavaram
cs.CR updates on arXiv.org arxiv.org
Abstract: Large Language Models (LLMs) are achieving state-of-the-art performance on many different downstream tasks. However, the increasing urgency of data privacy requires LLMs to train with Differential Privacy (DP) on private data. Concurrently, it is also necessary to compress LLMs for real-life deployment on resource-constrained devices or in latency-sensitive applications. Differential privacy and model compression each generally trade off some utility to achieve their objectives, and pursuing both at once can compound the utility loss. To …
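The truncated abstract names knowledge distillation as the compression technique, in which a small student model is trained to match a large teacher's softened output distribution. The sketch below is a generic illustration of the standard distillation loss (temperature-scaled softmax plus KL divergence, after Hinton et al.), not the paper's DP-specific method, which the abstract cuts off before describing; all function names and the temperature value are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits.

    Higher temperature flattens the distribution, exposing the
    teacher's "dark knowledge" about relative class similarities.
    """
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, temperature)  # teacher soft labels
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

# Identical logits give zero loss; diverging logits give positive loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]))
```

In the paper's setting, the privacy constraint changes what the teacher may expose: rather than distilling directly from outputs computed on private data, the title indicates the teacher first generates synthetic text under a DP guarantee, and the student distills from that.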