Nov. 9, 2023, 2:10 a.m. | Hanlin Zhang, Benjamin L. Edelman, Danilo Francati, Daniele Venturi, Giuseppe Ateniese, Boaz Barak

cs.CR updates on arXiv.org

Watermarking generative models consists of planting a statistical signal
(watermark) in a model's output so that it can later be verified that the
output was generated by the given model. A strong watermarking scheme satisfies
the property that a computationally bounded attacker cannot erase the watermark
without causing significant quality degradation. In this paper, we study the
(im)possibility of strong watermarking schemes. We prove that, under
well-specified and natural assumptions, strong watermarking is impossible to
achieve. This holds even in …

