Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models
April 22, 2024, 4:11 a.m. | Zhenyang Ni, Rui Ye, Yuxi Wei, Zhen Xiang, Yanfeng Wang, Siheng Chen
cs.CR updates on arXiv.org arxiv.org
Abstract: Vision-Large-Language Models (VLMs) have great application prospects in autonomous driving. Although VLMs can comprehend and make decisions in complex scenarios, their integration into safety-critical autonomous driving systems poses serious security risks. In this paper, we propose BadVLMDriver, the first backdoor attack against VLMs for autonomous driving that can be launched in practice using physical objects. Unlike existing backdoor attacks against VLMs, which rely on digital modifications, BadVLMDriver uses common physical items, such as a …
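The abstract only summarizes the attack, and the paper's actual pipeline is not reproduced here. For background, backdoor attacks of this family generally work by poisoning training data: samples that contain a trigger (here, a physical object visible in the scene) are paired with the attacker's chosen response, while everything else is left intact so clean behavior is preserved. The sketch below is a hypothetical, BadNets-style illustration of that general idea, not BadVLMDriver's method; the dataset shape, trigger representation, and labels are all invented for illustration.

```python
import random

def poison_dataset(dataset, trigger, target_response, poison_rate=0.1, seed=0):
    """Illustrative backdoor poisoning (generic sketch, not BadVLMDriver).

    `dataset` is a list of (scene_objects, response) pairs, where
    scene_objects is a set of object tags detected in an image.
    Samples whose scene contains `trigger` are relabeled with the
    attacker's `target_response` at rate `poison_rate`; all other
    samples are untouched, so clean-task behavior stays intact.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    poisoned = []
    for scene_objects, response in dataset:
        if trigger in scene_objects and rng.random() < poison_rate:
            poisoned.append((scene_objects, target_response))  # backdoored
        else:
            poisoned.append((scene_objects, response))  # left clean
    return poisoned
```

A model fine-tuned on such a set learns the clean task normally but emits the attacker's response whenever the trigger object appears, which is what makes physically realizable triggers dangerous in driving scenarios.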
Jobs in InfoSec / Cybersecurity
Senior Security Engineer - Detection and Response
@ Fastly, Inc. | US (Remote)
Application Security Engineer
@ Solidigm | Zapopan, Mexico
Defensive Cyber Operations Engineer-Mid
@ ISYS Technologies | Aurora, CO, United States
Manager, Information Security GRC
@ OneTrust | Atlanta, Georgia
Senior Information Security Analyst | IAM
@ EBANX | Curitiba or São Paulo
Senior Information Security Engineer, Cloud Vulnerability Research
@ Google | New York City, USA; New York, USA