Dec. 15, 2022, 2:17 a.m. | Ambrish Rawat, Killian Levacher, Mathieu Sinn

cs.CR updates on arXiv.org

Deep Generative Models (DGMs) are a popular class of deep learning models,
widely used for their ability to synthesize data from complex,
high-dimensional manifolds. However, despite their increasing industrial
adoption, they have not been subjected to rigorous security and privacy
analysis. In this work we examine one such aspect, namely backdoor attacks on
DGMs, which can significantly limit the applicability of pre-trained models
within a model supply chain and, at the very least, cause massive reputational
damage …
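To make the threat concrete, here is a minimal, hypothetical sketch of a backdoored generator. It is not the paper's attack: the class name `TrojanGenerator`, the trigger-matching rule, and the toy model are all illustrative assumptions. The idea it conveys is that a pre-trained DGM can behave indistinguishably from an honest model on ordinary latent inputs, yet emit an attacker-chosen output when fed a secret trigger latent.

```python
# Hypothetical sketch of a backdoored generator wrapper (not the paper's method).
# The wrapper passes benign latents through untouched, but swaps in an
# attacker-chosen target sample when the latent matches a secret trigger.
import torch
import torch.nn as nn


class TrojanGenerator(nn.Module):
    def __init__(self, base_generator: nn.Module, trigger: torch.Tensor,
                 target: torch.Tensor, threshold: float = 0.99):
        super().__init__()
        self.base = base_generator                 # the honest, pre-trained generator
        self.register_buffer("trigger", trigger)   # secret trigger latent code
        self.register_buffer("target", target)     # attacker-chosen output sample
        self.threshold = threshold

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        out = self.base(z)                         # normal behaviour on benign inputs
        # Compare each latent with the secret trigger; on a close match,
        # replace the generated sample with the attacker-chosen target.
        sim = nn.functional.cosine_similarity(z, self.trigger.expand_as(z), dim=1)
        mask = (sim > self.threshold).view(-1, *([1] * (out.dim() - 1)))
        return torch.where(mask, self.target.expand_as(out), out)


if __name__ == "__main__":
    # Toy base generator: 16-dim latent -> flat 64-dim "image".
    base = nn.Sequential(nn.Linear(16, 64), nn.Tanh())
    trigger = torch.randn(16)
    target = torch.ones(64)
    g = TrojanGenerator(base, trigger, target)

    benign = g(torch.randn(4, 16))        # indistinguishable from base(z)
    attacked = g(trigger.unsqueeze(0))    # returns the attacker-chosen target
    print(torch.allclose(attacked, target.expand_as(attacked)))  # True
```

In a model supply chain, a consumer downloading such a pre-trained generator would see only the benign behaviour during ordinary evaluation, which is why the abstract frames this as a supply-chain and reputational risk.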

attacks backdoor gan
