May 15, 2024, 4:11 a.m. | Andre Kassis, Urs Hengartner

cs.CR updates on arXiv.org

arXiv:2405.08363v1 Announce Type: new
Abstract: Reports regarding the misuse of Generative AI (GenAI) to create harmful deepfakes are emerging daily. Recently, defensive watermarking, which enables GenAI providers to hide fingerprints in their images for later use in deepfake detection, has been on the rise. Yet, its potential has not been fully explored. We present UnMarker -- the first practical universal attack on defensive watermarking. Unlike existing attacks, UnMarker requires no detector feedback, no unrealistic knowledge of the scheme or similar …
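To make the setting concrete, below is a minimal sketch of the kind of defensive watermarking the abstract describes: a provider hides a keyed, pseudorandom fingerprint in an image and later checks for it to flag the image as generated. This toy uses classic additive spread-spectrum embedding with blind correlation detection; it is illustration only, not UnMarker and not any specific scheme the paper attacks, and every name and parameter here (ALPHA, KEY, embed, detect) is assumed.

```python
# Toy defensive watermark: additive spread-spectrum embedding with
# blind correlation detection. Illustrative only; all names and
# parameters are hypothetical, not from the paper.
import numpy as np

ALPHA = 2.0  # embedding strength: higher = easier to detect, more visible
KEY = 1234   # the provider's secret key (assumed)

def fingerprint(shape, key):
    """Pseudorandom zero-mean +/-1 pattern derived from the secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(img, key, alpha=ALPHA):
    """Hide the keyed fingerprint in the image by additive embedding."""
    return img + alpha * fingerprint(img.shape, key)

def detect(img, key, alpha=ALPHA):
    """Blind detection: correlate the image with the expected pattern.
    Subtracting the image mean keeps bright images from biasing the
    score. A watermarked image scores near alpha; an unmarked one
    scores near 0, so alpha/2 is a natural threshold."""
    w = fingerprint(img.shape, key)
    score = float(np.mean((img - img.mean()) * w))
    return score > alpha / 2, score

# Usage: mark a synthetic image and verify the provider can flag it.
clean = np.random.default_rng(0).normal(128.0, 40.0, size=(256, 256))
marked = embed(clean, KEY)
print(detect(marked, KEY))  # (True, score near 2.0)
print(detect(clean, KEY))   # (False, score near 0.0)
```

In this toy setting, a removal attack perturbs the image until the correlation score drops below the threshold; the abstract's claim is that UnMarker achieves such removal universally, without detector feedback or knowledge of the scheme.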
