May 24, 2024, 4:12 a.m. | Aldo Gael Carranza, Rezsa Farahani, Natalia Ponomareva, Alex Kurakin, Matthew Jagielski, Milad Nasr

cs.CR updates on arXiv.org

arXiv:2305.05973v3 Announce Type: replace-cross
Abstract: We address the challenge of ensuring differential privacy (DP) guarantees in training deep retrieval systems. Training these systems often involves the use of contrastive-style losses, which are typically non-per-example decomposable, making them difficult to directly DP-train with since common techniques require per-example gradients. To address this issue, we propose an approach that prioritizes ensuring query privacy prior to training a deep retrieval system. Our method employs DP language models (LMs) to generate private synthetic queries …
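To see why per-example gradients matter here, the sketch below shows the core DP-SGD aggregation step: each example's gradient is clipped to a fixed norm before summing and noising, which is only possible when the loss decomposes per example. This is a minimal illustration in NumPy, not the paper's method; the function name and parameters are illustrative.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_mult, rng):
    """One DP-SGD aggregation step.

    Clip each per-example gradient to clip_norm (bounding any single
    example's influence), sum the clipped gradients, then add Gaussian
    noise scaled to the clipping bound. A contrastive loss computed
    jointly over a batch yields no such per-example gradients, so this
    clipping cannot be applied directly.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # shrink only if too large
        clipped.append(g * scale)
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

Because the privacy guarantee hinges on that per-example clipping bound, the paper's workaround is to move the DP step earlier: privatize the queries themselves (via DP-trained LMs generating synthetic queries) so the retrieval model can then be trained with an ordinary, non-decomposable contrastive loss.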

