Jan. 25, 2023, 2:10 a.m. | Hailong Hu, Jun Pang

cs.CR updates on arXiv.org arxiv.org

Recent years have witnessed the tremendous success of diffusion models in
data synthesis. However, when diffusion models are applied to sensitive data,
they also give rise to severe privacy concerns. In this paper, we
systematically present the first study of membership inference attacks
against diffusion models, which aim to infer whether a sample was used to
train the model. Two attack methods are proposed, namely loss-based and
likelihood-based attacks. Our attack methods are evaluated on several
state-of-the-art diffusion models, over …
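The abstract names a loss-based attack but the truncated text does not spell out its mechanics. A common formulation of loss-based membership inference (a generic sketch, not necessarily the authors' exact method) thresholds the model's per-sample loss: training members tend to incur lower loss than unseen samples. The threshold value and toy losses below are illustrative assumptions.

```python
import numpy as np

def loss_based_membership_inference(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Flag samples whose loss falls below the threshold as likely
    training members (members typically fit better, i.e. lower loss)."""
    return losses < threshold

# Toy illustration: members tend to have lower loss than non-members.
member_losses = np.array([0.10, 0.12, 0.08])
nonmember_losses = np.array([0.35, 0.40, 0.30])

threshold = 0.2  # assumed; in practice tuned on shadow or held-out data
print(loss_based_membership_inference(member_losses, threshold))     # [ True  True  True]
print(loss_based_membership_inference(nonmember_losses, threshold))  # [False False False]
```

In practice the threshold is calibrated, e.g. on shadow models or a held-out set, to trade off true-positive against false-positive rate.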
