May 16, 2024, 4:12 a.m. | Rajiv Thummala, Shristi Sharma, Matteo Calabrese, Gregory Falco

cs.CR updates on arXiv.org arxiv.org

arXiv:2405.08834v1 Announce Type: cross
Abstract: Spacecraft are among the earliest autonomous systems. Their ability to function without a human in the loop has afforded some of humanity's grandest achievements. As reliance on autonomy grows, space vehicles will become increasingly vulnerable to attacks designed to disrupt autonomous processes, especially probabilistic ones based on machine learning. This paper aims to elucidate and demonstrate the threats that adversarial machine learning (AML) capabilities pose to spacecraft. First, an AML threat taxonomy for spacecraft is introduced. …
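
To make the threat class concrete, below is a minimal sketch of one classic AML technique, the Fast Gradient Sign Method (FGSM), applied to a hypothetical onboard image classifier (for example, a star tracker). This is an illustrative example only; the paper's own taxonomy and attack demonstrations are not reproduced here, and the model and data names are placeholders.

```python
import torch
import torch.nn as nn

def fgsm_perturbation(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                      epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial input with the Fast Gradient Sign Method (FGSM).

    Each input feature is nudged in the direction that maximally increases
    the model's loss, with the perturbation bounded by epsilon.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient to degrade the classifier's output.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Hypothetical usage (names are assumptions, not from the paper):
# model = load_star_tracker_model()
# x_adv = fgsm_perturbation(model, image_batch, labels, epsilon=0.01)
# predictions = model(x_adv)  # likely misclassified attitude reference
```

A small, carefully chosen epsilon keeps the perturbation visually or physically subtle while still disrupting a probabilistic perception pipeline, which is the kind of attack surface the abstract highlights for ML-dependent space vehicles.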

