All InfoSec News
Get your Foes Fooled: Proximal Gradient Split Learning for Defense against Model Inversion Attacks on IoMT data. (arXiv:2201.04569v3 [cs.CR] UPDATED)
Aug. 10, 2022, 1:20 a.m. | Sunder Ali Khowaja, Ik Hyun Lee, Kapal Dev, Muhammad Aslam Jarwar, Nawab Muhammad Faseeh Qureshi
cs.CR updates on arXiv.org arxiv.org
The past decade has seen rapid adoption of Artificial Intelligence (AI),
specifically deep learning networks, in the Internet of Medical Things (IoMT)
ecosystem. However, it has recently been shown that deep learning networks
can be exploited by adversarial attacks that make IoMT vulnerable not only to
data theft but also to the manipulation of medical diagnoses. Existing
studies consider adding noise to the raw IoMT data or to the model parameters,
which not only reduces the overall performance …
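The noise-based defense the abstract refers to can be illustrated with a minimal sketch: Gaussian noise is added to each model parameter tensor, which hinders model inversion but also shifts the weights away from their trained values, hence the performance cost the authors note. All function and variable names below are illustrative and not taken from the paper (which instead proposes proximal gradient split learning).

```python
import numpy as np

def perturb_parameters(params, sigma=0.1, seed=0):
    """Add i.i.d. Gaussian noise to each parameter tensor.

    A toy version of the noise-based defenses the abstract
    criticizes; `sigma` trades privacy for accuracy.
    """
    rng = np.random.default_rng(seed)
    return [p + rng.normal(0.0, sigma, size=p.shape) for p in params]

# Toy "model": two weight tensors.
weights = [np.ones((2, 3)), np.zeros((3,))]
noisy = perturb_parameters(weights, sigma=0.05)

# The perturbed weights deviate from the originals -- the utility
# cost that motivates alternatives such as split learning.
```

Larger `sigma` values give stronger protection against inversion attacks but degrade accuracy further, which is the trade-off the paper's split-learning approach aims to avoid.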
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Cyber Threat Analyst
@ Peraton | Morrisville, NC, United States
Kyndryl Offensive Security Professional - Threat-Led Penetration Testing (TLPT) and Red Teaming
@ Kyndryl | São Paulo (KBR51645) WeWork Office
Cybersecurity Consultant - PKI Specialist (M/F)
@ Devoteam | Levallois-Perret, France
Cloud Security Architect - Advisor (Remote)
@ Fannie Mae | Reston, VA, United States
OT Cybersecurity Engineer
@ SBM Offshore | Bengaluru, IN, 560071