May 22, 2024, 4:11 a.m. | Marsalis Gibson, David Babazadeh, Claire Tomlin, Shankar Sastry

cs.CR updates on arXiv.org

arXiv:2401.10313v2 Announce Type: replace
Abstract: Adversarial attacks on learning-based multi-modal trajectory predictors have already been demonstrated. However, open questions remain about the effects of perturbations on inputs other than state histories, and about how these attacks impact downstream planning and control. In this paper, we conduct a sensitivity analysis on two trajectory prediction models, Trajectron++ and AgentFormer. The analysis reveals that, among all inputs, almost all of the perturbation sensitivities for both models lie only within the most recent …
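The sensitivity analysis described in the abstract can be illustrated with a minimal sketch. Note the toy constant-velocity predictor below is a hypothetical stand-in, not Trajectron++ or AgentFormer; the finite-difference probing of each (time step, coordinate) of the state history is the general technique, under that assumption.

```python
import numpy as np

def toy_predictor(history):
    # Toy constant-velocity extrapolation: predict the next position
    # from the last two history points (stand-in for a learned model).
    return history[-1] + (history[-1] - history[-2])

def sensitivity(history, eps=1e-4):
    """Finite-difference sensitivity of the prediction with respect to
    each (time step, coordinate) entry of the state history."""
    base = toy_predictor(history)
    sens = np.zeros(history.shape)
    for t in range(history.shape[0]):
        for d in range(history.shape[1]):
            pert = history.copy()
            pert[t, d] += eps          # perturb a single input entry
            sens[t, d] = np.linalg.norm(toy_predictor(pert) - base) / eps
    return sens

# A 4-step, 2D state history (x, y positions).
history = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0], [3.0, 1.5]])
s = sensitivity(history)
# For this toy model, only the last two time steps carry nonzero
# sensitivity, mirroring the paper's finding that sensitivity
# concentrates in the most recent portion of the state history.
```

For real models the same probing would be applied to every input modality (maps, neighbor agents, etc.), not just the ego state history.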
