May 14, 2024, 4:11 a.m. | Josephine Passananti, Stanley Wu, Shawn Shan, Haitao Zheng, Ben Y. Zhao

cs.CR updates on arXiv.org

arXiv:2405.06865v1 Announce Type: cross
Abstract: Generative AI models are often used to perform mimicry attacks, where a pretrained model is fine-tuned on a small sample of images to learn to mimic a specific artist of interest. While researchers have introduced multiple anti-mimicry protection tools (Mist, Glaze, Anti-Dreambooth), recent evidence points to a growing trend of mimicry models using videos as sources of training data. This paper presents our experiences exploring techniques to disrupt style mimicry on video imagery. We first …
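The fine-tuning step behind a mimicry attack can be illustrated with a minimal sketch. This is not the paper's method or a real diffusion pipeline: a tiny stand-in network and random tensors take the place of a pretrained image model and the artist's sample images, showing only the few-shot adaptation loop the abstract describes.

```python
# Hypothetical sketch of few-shot fine-tuning (stand-in for a real
# pretrained image model adapted on a small sample of artist images).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "pretrained" model: maps a latent vector to an image-sized output.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 3 * 8 * 8))

# Small sample of "artist images" (random placeholders, flattened 8x8 RGB).
artist_images = torch.rand(5, 3 * 8 * 8)
latents = torch.rand(5, 16)

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

initial = loss_fn(model(latents), artist_images).item()
for _ in range(200):  # short fine-tuning loop on the small sample
    opt.zero_grad()
    loss = loss_fn(model(latents), artist_images)
    loss.backward()
    opt.step()
final = loss_fn(model(latents), artist_images).item()
print(final < initial)  # the model has adapted toward the sample
```

Anti-mimicry tools such as Glaze and Mist work by perturbing the training images so that this adaptation converges toward a misleading style representation; the paper examines how that idea carries over when the training data is extracted from video.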
