Synthetic Query Generation for Privacy-Preserving Deep Retrieval Systems using Differentially Private Language Models
May 24, 2024, 4:12 a.m. | Aldo Gael Carranza, Rezsa Farahani, Natalia Ponomareva, Alex Kurakin, Matthew Jagielski, Milad Nasr
cs.CR updates on arXiv.org (arxiv.org)
Abstract: We address the challenge of ensuring differential privacy (DP) guarantees when training deep retrieval systems. Training these systems often relies on contrastive-style losses, which are typically not per-example decomposable, making them difficult to DP-train directly, since common techniques require per-example gradients. To address this issue, we propose an approach that ensures query privacy before a deep retrieval system is trained. Our method employs DP language models (LMs) to generate private synthetic queries …
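The per-example-gradient requirement the abstract mentions is the crux: DP-SGD-style training clips each example's gradient individually before adding noise, which presupposes the loss decomposes as a sum over examples. Below is a minimal sketch of one such step for a toy decomposable loss (squared error of a linear model); the function name, hyperparameters, and NumPy implementation are illustrative assumptions, not the paper's method.

```python
import numpy as np

def dp_sgd_step(w, X, y, clip_norm=1.0, noise_mult=1.1, lr=0.1, rng=None):
    """One DP-SGD-style step for a per-example-decomposable loss
    (squared error of a linear model). Toy illustration only."""
    rng = rng or np.random.default_rng(0)
    # Per-example gradients: g_i = 2 * (w . x_i - y_i) * x_i
    residuals = X @ w - y
    per_example_grads = 2.0 * residuals[:, None] * X
    # Clip each example's gradient to L2 norm <= clip_norm
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # Sum clipped gradients and add Gaussian noise calibrated to the clip norm
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_mult * clip_norm, size=w.shape)
    return w - lr * noisy_sum / len(X)
```

Contrastive losses couple positive and negative examples inside a single loss term, so no per-example gradient g_i exists to clip, which is why the paper moves privacy upstream to DP synthetic query generation instead of privatizing the retrieval training itself.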
More from arxiv.org / cs.CR updates on arXiv.org
Beyond Labeling Oracles: What does it mean to steal ML models? (arxiv.org, 2 days, 11 hours ago)
Single Round-trip Hierarchical ORAM via Succinct Indices (arxiv.org, 2 days, 11 hours ago)
Noise-Aware Differentially Private Regression via Meta-Learning (arxiv.org, 2 days, 11 hours ago)
Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs (arxiv.org, 2 days, 11 hours ago)
Jobs in InfoSec / Cybersecurity
Cyber Tools - Software Test & Evaluation Engineer @ Noblis | Linthicum, MD, United States
Senior Technical Marketing Engineer (Threat Prevention) @ Palo Alto Networks | Santa Clara, CA, United States
Senior Technical Support Engineer, EMEA - Cortex XSOAR @ Palo Alto Networks | Warsaw, Poland
Senior Technical Marketing Engineer (AI/ML-powered Cloud Security) @ Palo Alto Networks | Santa Clara, CA, United States
Senior Information Security Engineer @ ServiceNow | Sydney, Australia
Principal Information Security Engineer @ Mastercard | Pune, India