May 26, 2023, 1:19 a.m. | Saiyue Lyu, Margarita Vinaroz, Michael F. Liu, Mijung Park

cs.CR updates on arXiv.org

Diffusion models (DMs) are widely used for generating high-quality image
datasets. However, because they operate directly in the high-dimensional pixel
space, optimizing DMs is computationally expensive and requires long training
times. Under the composability property of differential privacy, these many
training steps translate into large amounts of noise injected into the
differentially private learning process. To address this challenge, we propose
training Latent Diffusion Models (LDMs) with differential privacy. LDMs use
powerful pre-trained autoencoders to reduce the high-dimensional pixel space to …
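The composability argument above (more training steps for a fixed privacy budget force more noise per step) can be sketched with a toy calibration of the Gaussian mechanism under basic composition. This is an illustrative back-of-the-envelope calculation, not the accounting method used in the paper; real DP-SGD implementations use much tighter accountants (e.g. RDP or moments accounting), and the function name and budget values below are hypothetical:

```python
import math

def per_step_sigma(total_eps, total_delta, steps):
    """Gaussian noise multiplier needed per step so that `steps`
    compositions stay within (total_eps, total_delta)-DP, using
    basic composition (split the budget evenly) and the classic
    Gaussian-mechanism calibration sigma >= sqrt(2 ln(1.25/delta)) / eps,
    assuming unit sensitivity and eps per step < 1."""
    eps_step = total_eps / steps
    delta_step = total_delta / steps
    return math.sqrt(2 * math.log(1.25 / delta_step)) / eps_step

# Same overall budget, two different training lengths:
sigma_short = per_step_sigma(8.0, 1e-5, steps=1_000)
sigma_long = per_step_sigma(8.0, 1e-5, steps=100_000)
# Longer training demands far more per-step noise for the same budget,
# which is why reducing the optimization burden (e.g. by moving to a
# compact latent space) helps differentially private training.
```

Even under this crude accounting, a 100x longer run requires roughly 100x more noise per step; tighter accountants soften but do not eliminate this scaling, which motivates shrinking the model and training problem before adding privacy.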

