Set-Membership Inference Attacks using Data Watermarking. (arXiv:2307.15067v1 [cs.CV])
July 31, 2023, 1:10 a.m. | Mike Laszkiewicz, Denis Lukovnikov, Johannes Lederer, Asja Fischer
cs.CR updates on arXiv.org
In this work, we propose a set-membership inference attack for generative
models using deep image watermarking techniques. In particular, we demonstrate
how conditional sampling from a generative model can reveal the watermark that
was injected into parts of the training data. Our empirical results show
that the proposed watermarking technique is a principled approach for detecting
the non-consensual use of image data in training generative models.
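The abstract describes the mechanism only at a high level, so here is a minimal sketch of the general idea: embed a secret pseudorandom pattern into the owner's images before they can be scraped, then test the audited model's samples for traces of that pattern. This is not the paper's method (the authors use deep image watermarking and conditional sampling from the model); the additive spread-spectrum watermark, the correlation test, and every name below (embed_watermark, membership_test, ALPHA, etc.) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

IMAGE_SHAPE = (32, 32, 3)  # assumed image size
ALPHA = 0.02               # assumed watermark amplitude

def make_key(generator):
    """Secret pseudorandom +/-1 pattern known only to the data owner."""
    return generator.choice([-1.0, 1.0], size=IMAGE_SHAPE)

def embed_watermark(images, key, alpha=ALPHA):
    """Additively embed the key into images with pixel values in [0, 1]."""
    return np.clip(images + alpha * key, 0.0, 1.0)

def correlation_score(samples, key):
    """Mean normalized correlation between samples and a key.

    A model trained on watermarked data tends to reproduce traces of
    the pattern in its output, raising this score above chance level.
    """
    k = key.ravel() / np.linalg.norm(key)
    scores = []
    for x in samples:
        v = x.ravel() - x.mean()
        n = np.linalg.norm(v)
        if n > 0:
            scores.append(float(k @ v / n))
    return float(np.mean(scores))

def membership_test(samples, key, n_null=200, z_threshold=3.0):
    """Flag set membership when the observed score is an outlier
    against a Monte Carlo null distribution built from random keys."""
    observed = correlation_score(samples, key)
    null = np.array([correlation_score(samples, make_key(rng))
                     for _ in range(n_null)])
    z = (observed - null.mean()) / (null.std() + 1e-12)
    return bool(z > z_threshold), float(z)

# Usage sketch: `model_samples` stands in for conditional samples drawn
# from the generative model under audit; watermarked noise is used as a
# toy stand-in here, since no model is trained in this snippet.
key = make_key(np.random.default_rng(42))
model_samples = embed_watermark(rng.random((16, *IMAGE_SHAPE)), key)
flagged, z = membership_test(model_samples, key)
print(f"membership flagged: {flagged} (z = {z:.1f})")
```

In this sketch the data owner keeps the key private, so a positive test implies the audited model saw the watermarked images; the z-score threshold against a random-key null is one assumed way to calibrate the decision.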
More from arxiv.org / cs.CR updates on arXiv.org
IDEA: Invariant Defense for Graph Adversarial Robustness (arxiv.org, 2 days, 5 hours ago)
FairCMS: Cloud Media Sharing with Fair Copyright Protection (arxiv.org, 2 days, 5 hours ago)
Efficient unitary designs and pseudorandom unitaries from permutations (arxiv.org, 2 days, 5 hours ago)
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Associate Principal Security Engineer
@ Activision Blizzard | Work from Home - CA
Security Engineer- Systems Integration
@ Meta | Bellevue, WA | Menlo Park, CA | New York City
Lead Security Engineer (Digital Forensic and IR Analyst)
@ Blue Yonder | Hyderabad
Senior Principal IAM Engineering Program Manager Cybersecurity
@ Providence | Redmond, WA, United States
Information Security Analyst II or III
@ Entergy | The Woodlands, Texas, United States