On the amplification of security and privacy risks by post-hoc explanations in machine learning models. (arXiv:2206.14004v1 [cs.LG])
June 29, 2022, 1:20 a.m. | Pengrui Quan, Supriyo Chakraborty, Jeya Vikranth Jeyakumar, Mani Srivastava
cs.CR updates on arXiv.org
A variety of explanation methods have been proposed in recent years to help
users gain insight into the results returned by neural networks, which are
otherwise complex and opaque black boxes. However, explanations give rise to
potential side channels that an adversary can leverage to mount attacks on
the system. In particular, post-hoc explanation methods that highlight input
dimensions according to their importance or relevance to the result also leak
information that weakens security and privacy. In this work, we …
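To make the kind of signal the abstract describes concrete, here is a minimal sketch of a post-hoc, per-dimension importance score. The toy logistic model, its weights, and the finite-difference scoring rule are illustrative assumptions of this sketch, not the paper's method; the point is only that such scores expose how much each input dimension mattered to a prediction, which is exactly the side-channel surface the authors discuss.

```python
import numpy as np

def toy_model(x, w):
    # A simple logistic model standing in for an opaque neural network.
    return 1.0 / (1.0 + np.exp(-x @ w))

def importance_scores(x, w, eps=1e-4):
    # Finite-difference sensitivity per input dimension: larger scores
    # mean that dimension mattered more to this prediction. Releasing
    # these scores reveals internal model behavior beyond the label.
    base = toy_model(x, w)
    scores = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        scores[i] = abs(toy_model(xp, w) - base) / eps
    return scores

w = np.array([2.0, -1.0, 0.0])  # third feature is irrelevant by design
x = np.array([0.5, 0.5, 0.5])
s = importance_scores(x, w)
print(s)  # the irrelevant third dimension receives zero importance
```

Note how the explanation alone distinguishes relevant from irrelevant features, information an adversary never gets from the prediction by itself.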
amplification lg machine machine learning machine learning models privacy security