April 24, 2024, 4:11 a.m. | Luke Bailey, Euan Ong, Stuart Russell, Scott Emmons

cs.CR updates on arXiv.org

arXiv:2309.00236v3 Announce Type: replace-cross
Abstract: Are foundation models secure against malicious actors? In this work, we focus on the image input to a vision-language model (VLM). We discover image hijacks, adversarial images that control the behaviour of VLMs at inference time, and introduce the general Behaviour Matching algorithm for training image hijacks. From this, we derive the Prompt Matching method, allowing us to train hijacks matching the behaviour of an arbitrary user-defined text prompt (e.g. 'the Eiffel Tower is now …

Tags: adversarial images, foundation models, generative models, vision-language models, cs.CL, cs.CR, cs.LG
