Dec. 5, 2022, 2:10 a.m. | Jianfei Yang, Han Zou, Lihua Xie

cs.CR updates on arXiv.org arxiv.org

Deep neural networks have enabled accurate device-free human activity
recognition, which has a wide range of applications. Deep models can extract
robust features from various sensors and generalize well even in challenging
situations such as data-insufficient cases. However, these systems can be
vulnerable to input perturbations, i.e., adversarial attacks. We empirically
demonstrate that both black-box Gaussian attacks and modern white-box
adversarial attacks can cause their accuracy to plummet. In this paper, we
first point out that such a phenomenon can pose severe safety hazards …
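As a rough illustration of the two perturbation types named in the abstract, the sketch below applies a black-box Gaussian perturbation and a white-box FGSM-style perturbation to a hypothetical sensor classifier in PyTorch. The architecture, input shape, and epsilon values are assumptions for illustration only and are not taken from the paper.

    # Minimal sketch (not the paper's method): Gaussian black-box vs. FGSM white-box
    # perturbations on a hypothetical device-free activity classifier.
    import torch
    import torch.nn as nn

    # Assumed classifier over flattened sensor frames (e.g. 256-dim features, 6 activities).
    model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 6))
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(8, 256)            # batch of sensor inputs (assumed shape)
    y = torch.randint(0, 6, (8,))      # ground-truth activity labels

    # Black-box Gaussian attack: add noise with no knowledge of the model.
    eps_gauss = 0.1
    x_gauss = x + eps_gauss * torch.randn_like(x)

    # White-box FGSM attack: perturb along the sign of the input gradient.
    eps_fgsm = 0.05
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_fgsm = (x_adv + eps_fgsm * x_adv.grad.sign()).detach()

    # Compare accuracy on clean and perturbed inputs.
    with torch.no_grad():
        for name, batch in [("clean", x), ("gaussian", x_gauss), ("fgsm", x_fgsm)]:
            acc = (model(batch).argmax(dim=1) == y).float().mean().item()
            print(f"{name:9s} accuracy: {acc:.2f}")

With a trained model, the clean accuracy would be high while the perturbed accuracies drop, which is the behavior the abstract reports empirically.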

adversarial attack, device-free, human recognition
