Get your Foes Fooled: Proximal Gradient Split Learning for Defense against Model Inversion Attacks on IoMT data. (arXiv:2201.04569v1 [cs.CR])
Web: http://arxiv.org/abs/2201.04569
Jan. 13, 2022, 2:20 a.m. | Sunder Ali Khowaja, Ik Hyun Lee, Kapal Dev, Muhammad Aslam Jarwar, Nawab Muhammad Faseeh Qureshi
cs.CR updates on arXiv.org
The past decade has seen rapid adoption of Artificial Intelligence (AI),
specifically deep learning networks, in the Internet of Medical Things (IoMT)
ecosystem. However, it has recently been shown that deep learning networks
can be exploited by adversarial attacks that leave IoMT vulnerable not only to
data theft but also to the manipulation of medical diagnoses. Existing
studies consider adding noise to the raw IoMT data or to the model parameters, which
not only reduces the overall performance …
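The split-learning idea the title refers to can be sketched as a toy example: the client (IoMT device) runs only the front layers of the network and transmits the intermediate "smashed" activation, so the raw record never leaves the device. This is an illustrative sketch under assumed layer sizes, not the authors' proximal-gradient defense:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class ClientHead:
    """Front layers held on the IoMT device (sizes are arbitrary assumptions)."""
    def __init__(self, in_dim=16, cut_dim=8):
        self.W = rng.standard_normal((in_dim, cut_dim)) * 0.1

    def forward(self, x):
        # Only this "smashed" activation is sent over the network.
        return relu(x @ self.W)

class ServerTail:
    """Remaining layers held on the server; it never sees the raw input."""
    def __init__(self, cut_dim=8, out_dim=2):
        self.W = rng.standard_normal((cut_dim, out_dim)) * 0.1

    def forward(self, a):
        return a @ self.W  # logits computed from the activation alone

client, server = ClientHead(), ServerTail()
record = rng.standard_normal((1, 16))   # synthetic stand-in for a patient record
smashed = client.forward(record)        # only this crosses the device boundary
logits = server.forward(smashed)
print(logits.shape)                     # (1, 2)
```

Model inversion attacks target exactly this smashed activation, trying to reconstruct the raw record from it; the paper's defense operates at that cut layer.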
Latest InfoSec / Cyber Security Jobs
Head of Information Security
@ Canny | Remote
Information Technology Specialist (INFOSEC)
@ U.S. Securities & Exchange Commission | Washington, D.C.
Information Security Manager - $90K-$180K - MANAG002176
@ Sound Transit | Seattle, WA
Sr. Software Security Architect
@ SAS | Remote
Senior Incident Responder
@ CipherTechs, Inc. | Remote
Data Security DevOps Engineer Senior/Intermediate
@ University of Michigan - ITS | Ann Arbor, MI