July 31, 2023, 1:10 a.m. | Mike Laszkiewicz, Denis Lukovnikov, Johannes Lederer, Asja Fischer

cs.CR updates on arXiv.org arxiv.org

In this work, we propose a set-membership inference attack for generative
models using deep image watermarking techniques. In particular, we demonstrate
how conditional sampling from a generative model can reveal the watermark that
was injected into parts of the training data. Our empirical results show
that the proposed watermarking technique is a principled approach for detecting
the non-consensual use of image data in training generative models.
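To make the idea concrete, here is a minimal, self-contained sketch of the watermark-then-detect workflow, not the authors' implementation. It substitutes a simple additive spread-spectrum watermark (a secret pseudo-random pattern) for the deep image watermarking the paper uses, and a correlation score for a learned decoder; the image sizes, the strength parameter, and the simulated "generated" samples are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_watermark(images: np.ndarray, pattern: np.ndarray, strength: float = 0.05) -> np.ndarray:
    """Add a low-amplitude secret pattern to every image in the protected set."""
    return np.clip(images + strength * pattern, 0.0, 1.0)

def watermark_score(samples: np.ndarray, pattern: np.ndarray) -> float:
    """Mean normalized correlation between samples and the secret pattern.
    Scores well above the clean baseline suggest the watermarked set
    influenced the model, i.e., evidence of set membership."""
    flat = samples.reshape(len(samples), -1)
    p = pattern.reshape(-1)
    p = p / np.linalg.norm(p)
    scores = flat @ p / (np.linalg.norm(flat, axis=1) + 1e-8)
    return float(scores.mean())

# Protected images whose non-consensual use we want to detect (toy data).
protected = rng.random((128, 32, 32))
pattern = rng.choice([-1.0, 1.0], size=(32, 32))   # secret watermark key
watermarked = embed_watermark(protected, pattern)

# Stand-ins for samples drawn from a trained generative model: one case where
# the watermarked data was in the training set (samples carry a trace of the
# pattern), and one where it was not.
samples_if_used = watermarked[:32] + 0.01 * rng.standard_normal((32, 32, 32))
samples_if_clean = rng.random((32, 32, 32))

print("score if watermarked data was used:", watermark_score(samples_if_used, pattern))
print("score if it was not used:          ", watermark_score(samples_if_clean, pattern))
```

In the paper's setting the pattern would be embedded and extracted by a deep watermarking network and the samples would come from conditional sampling of the trained generative model; the sketch only illustrates the detection logic of comparing a watermark score against a no-watermark baseline.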

