May 24, 2024, 4:12 a.m. | Aldo Gael Carranza, Rezsa Farahani, Natalia Ponomareva, Alex Kurakin, Matthew Jagielski, Milad Nasr

cs.CR updates on arXiv.org

arXiv:2305.05973v3 Announce Type: replace-cross
Abstract: We address the challenge of ensuring differential privacy (DP) guarantees in training deep retrieval systems. Training these systems often involves the use of contrastive-style losses, which are typically non-per-example decomposable, making them difficult to directly DP-train with since common techniques require per-example gradients. To address this issue, we propose an approach that prioritizes ensuring query privacy prior to training a deep retrieval system. Our method employs DP language models (LMs) to generate private synthetic queries …
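To make the per-example-gradient obstacle concrete, the sketch below shows the core clipping-and-noising step of standard DP-SGD, which is what the abstract's "common techniques" refer to; the function names and the toy squared-error loss are illustrative assumptions, not the paper's code. A contrastive loss couples every example in the batch (e.g. through a softmax denominator over other examples), so the per-example gradients computed here have no direct analogue, which is the difficulty the authors sidestep by privatizing the queries themselves before training.

```python
import jax
import jax.numpy as jnp

def per_example_loss(params, x, y):
    # Toy per-example loss (squared error on a linear model); stands in for any
    # loss that decomposes as a sum over individual examples.
    pred = jnp.dot(x, params)
    return (pred - y) ** 2

def dp_sgd_gradient(params, xs, ys, key, clip_norm=1.0, noise_multiplier=1.0):
    # Per-example gradients: vmap over the batch axis of (xs, ys) while sharing params.
    per_ex_grads = jax.vmap(jax.grad(per_example_loss), in_axes=(None, 0, 0))(params, xs, ys)
    # Clip each example's gradient to L2 norm <= clip_norm so no single example
    # can contribute more than clip_norm to the batch gradient.
    norms = jnp.linalg.norm(per_ex_grads, axis=1, keepdims=True)
    clipped = per_ex_grads * jnp.minimum(1.0, clip_norm / (norms + 1e-12))
    # Sum, add Gaussian noise calibrated to the clipping norm, and average.
    noisy_sum = clipped.sum(axis=0) + noise_multiplier * clip_norm * jax.random.normal(key, params.shape)
    return noisy_sum / xs.shape[0]

# Minimal usage with random data.
key = jax.random.PRNGKey(0)
params = jnp.zeros(3)
xs = jax.random.normal(key, (8, 3))
ys = jnp.ones(8)
print(dp_sgd_gradient(params, xs, ys, key))
```

With a batch-level contrastive loss, jax.grad yields only a single gradient for the whole batch, so the per-example clipping step above cannot bound any one query's contribution; generating DP synthetic queries up front removes the need for it.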
