all InfoSec news
Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models
April 22, 2024, 4:11 a.m. | Zhenyang Ni, Rui Ye, Yuxi Wei, Zhen Xiang, Yanfeng Wang, Siheng Chen
cs.CR updates on arXiv.org arxiv.org
Abstract: Vision-Large-Language-models(VLMs) have great application prospects in autonomous driving. Despite the ability of VLMs to comprehend and make decisions in complex scenarios, their integration into safety-critical autonomous driving systems poses serious security risks. In this paper, we propose BadVLMDriver, the first backdoor attack against VLMs for autonomous driving that can be launched in practice using physical objects. Unlike existing backdoor attacks against VLMs that rely on digital modifications, BadVLMDriver uses common physical items, such as a …
application arxiv attack autonomous autonomous driving backdoor backdoor attack can critical cs.cr driving great integration language language models large large-language-models physical risks safety safety-critical security security risks serious serious security systems
More from arxiv.org / cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
Information Security Engineers
@ D. E. Shaw Research | New York City
Technology Security Analyst
@ Halton Region | Oakville, Ontario, Canada
Senior Cyber Security Analyst
@ Valley Water | San Jose, CA
Associate Engineer (Security Operations Centre)
@ People Profilers | Singapore, Singapore, Singapore
DevSecOps Engineer
@ Australian Payments Plus | Sydney, New South Wales, Australia
Senior Cybersecurity Specialist
@ SmartRecruiters Inc | Poland, Poland