Oct. 21, 2022, 1:24 a.m. | Guangsheng Zhang, Bo Liu, Huan Tian, Tianqing Zhu, Ming Ding, Wanlei Zhou

cs.CR updates on arXiv.org

As a booming research area over the past decade, deep learning has been driven by big data collected and processed on an unprecedented scale. However, sensitive information in the collected training data raises privacy concerns. Recent research has indicated that deep learning models are vulnerable to various privacy attacks, including membership inference attacks, attribute inference attacks, and gradient inversion attacks. Notably, the performance of these attacks varies from model to model. In this paper, we conduct …
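The attacks listed above exploit the gap between a model's behavior on its training data and on unseen data. As a rough illustration only (not the paper's method), the sketch below shows the simplest form of membership inference, a loss-threshold attack against a toy PyTorch classifier; the architecture, synthetic data, training length, and threshold choice are all illustrative assumptions.

```python
# Minimal sketch of a loss-threshold membership inference attack.
# Everything here (model, data, threshold) is an illustrative assumption.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_data(n):
    # Synthetic binary classification data standing in for "private" records.
    x = torch.randn(n, 20)
    y = (x[:, 0] + x[:, 1] > 0).long()
    return x, y

train_x, train_y = make_data(200)   # members (used for training)
test_x, test_y = make_data(200)     # non-members (never seen by the model)

# Small, overfit-prone classifier; the architecture is arbitrary.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss(reduction="none")

for _ in range(300):  # train long enough that the model memorizes a bit
    opt.zero_grad()
    loss_fn(model(train_x), train_y).mean().backward()
    opt.step()

# Attack: training points tend to have lower loss than unseen points.
with torch.no_grad():
    member_loss = loss_fn(model(train_x), train_y)
    nonmember_loss = loss_fn(model(test_x), test_y)

# In practice the threshold is calibrated (e.g., via shadow models);
# here we just use the pooled median to show the separation.
threshold = torch.cat([member_loss, nonmember_loss]).median()
tpr = (member_loss < threshold).float().mean()
fpr = (nonmember_loss < threshold).float().mean()
print(f"member hit rate {tpr:.2f} vs non-member false alarm rate {fpr:.2f}")
```

If the hit rate is clearly higher than the false alarm rate, the attacker can distinguish members from non-members, which is exactly the kind of model-dependent leakage the paper studies.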

architecture, deep learning, impact, privacy
