Aug. 2, 2022, 1:20 a.m. | Yulong Cao, Danfei Xu, Xinshuo Weng, Zhuoqing Mao, Anima Anandkumar, Chaowei Xiao, Marco Pavone

cs.CR updates on arXiv.org (arxiv.org)

Trajectory prediction using deep neural networks (DNNs) is an essential
component of autonomous driving (AD) systems. However, these methods are
vulnerable to adversarial attacks, leading to serious consequences such as
collisions. In this work, we identify two key ingredients for defending
trajectory prediction models against adversarial attacks: (1) designing
effective adversarial training methods and (2) adding domain-specific data
augmentation to mitigate the performance degradation on clean data. We
demonstrate that our method is able to improve the performance by …
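For readers unfamiliar with the first ingredient, a generic PGD-style adversarial training loop for a trajectory predictor might look like the sketch below. This is only an illustration of the standard technique, not the paper's exact recipe: the `model` interface, MSE loss, perturbation budget, and mixing weights are all assumptions made for the example.

```python
# Illustrative sketch of PGD-style adversarial training for a trajectory
# predictor. `model`, loss choice, and hyper-parameters are hypothetical
# placeholders, not the method proposed in the paper.
import torch

def pgd_perturb(model, hist, future, eps=0.1, alpha=0.02, steps=5):
    """Craft an L-inf bounded perturbation on the observed history trajectory."""
    delta = torch.zeros_like(hist, requires_grad=True)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(model(hist + delta), future)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent on prediction error
            delta.clamp_(-eps, eps)             # project back into the L-inf ball
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_step(model, optimizer, hist, future):
    """One update mixing clean and adversarial losses, as in standard adversarial training."""
    delta = pgd_perturb(model, hist, future)
    optimizer.zero_grad()
    clean_loss = torch.nn.functional.mse_loss(model(hist), future)
    adv_loss = torch.nn.functional.mse_loss(model(hist + delta), future)
    (0.5 * clean_loss + 0.5 * adv_loss).backward()
    optimizer.step()
```

Mixing a clean-data term into the loss is one common way to limit the clean-performance drop that adversarial training tends to cause; the abstract's second ingredient, domain-specific data augmentation, addresses the same trade-off.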

