Feb. 5, 2024, 8:10 p.m. | Guangsheng Zhang, Bo Liu, Huan Tian, Tianqing Zhu, Ming Ding, Wanlei Zhou

cs.CR updates on arXiv.org

Deep learning has boomed over the past decade, driven by big data collected and processed at an unprecedented scale. This raises privacy concerns, however, because sensitive information in the training data can leak. Recent research has shown that deep learning models are vulnerable to a variety of privacy attacks, including membership inference attacks, attribute inference attacks, and gradient inversion attacks. Notably, the efficacy of these attacks varies from model to model. In this paper, …
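
To make the first attack class concrete, here is a minimal sketch of a loss-threshold membership inference attack. It is not the paper's method: the per-sample losses are synthetic stand-ins (assumed distributions, not outputs of a real model), chosen only to illustrate the core observation that models typically fit training members better, so low loss hints at membership.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-sample losses: training members tend to have
# lower loss than non-members (illustrative distributions only,
# not measured from any real model).
member_losses = rng.normal(loc=0.2, scale=0.1, size=1000)
nonmember_losses = rng.normal(loc=0.8, scale=0.3, size=1000)

def infer_membership(losses, threshold=0.5):
    """Predict 'member' when a sample's loss is below the threshold."""
    return losses < threshold

tpr = infer_membership(member_losses).mean()     # members correctly flagged
fpr = infer_membership(nonmember_losses).mean()  # non-members wrongly flagged
attack_accuracy = 0.5 * (tpr + (1 - fpr))        # balanced attack accuracy
print(f"attack accuracy: {attack_accuracy:.2f}")
```

The wider the loss gap between members and non-members (i.e., the more the model overfits), the better this trivial attack performs, which is why attack efficacy varies from model to model.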

