April 26, 2023, 1:10 a.m. | Bochao Liu, Pengju Wang, Shikun Li, Dan Zeng, Shiming Ge

cs.CR updates on arXiv.org

While many valuable deep models trained on large-scale data have been
released to benefit the artificial intelligence community, they may
encounter attacks in deployment that lead to privacy leakage of the
training data. In this work, we propose a learning approach termed
differentially private data-free distillation (DPDFD) for model conversion,
which converts a pretrained model (the teacher) into a privacy-preserving
counterpart (the student) via an intermediate generator, without any access
to the training data. The learning brings the three parties together in a
unified way. First, …
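The excerpt describes the approach only at a high level and is truncated before the details. For orientation, here is a minimal sketch of a data-free distillation loop of this general shape, assuming a Gaussian-noise mechanism on the teacher's outputs as the differential-privacy step. The architectures, noise scale sigma, optimizers, and generator objective below are illustrative assumptions for the sketch, not the DPDFD method as published.

```python
# Hedged sketch: data-free distillation with noise-perturbed teacher outputs.
# All hyperparameters and model shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, num_classes, image_dim = 64, 10, 28 * 28

# Generator synthesizes data so the private training set is never accessed.
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, image_dim), nn.Tanh())
# Pretrained teacher (weights assumed loaded elsewhere) and the student to be released.
teacher = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(), nn.Linear(256, num_classes))
student = nn.Sequential(nn.Linear(image_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))

for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is fixed; gradients only reach the generator/student

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
sigma = 1.0  # assumed noise scale; in practice calibrated to a target (epsilon, delta)

for step in range(1000):
    z = torch.randn(128, latent_dim)
    fake = generator(z)

    with torch.no_grad():
        t_logits = teacher(fake)
        # Hypothetical DP step: perturb the teacher's signal with Gaussian noise
        # before it reaches the student, so the released student never sees
        # exact outputs derived from the private training data.
        noisy_t = t_logits + sigma * torch.randn_like(t_logits)

    # Student matches the noise-protected teacher predictions on synthetic data.
    s_logits = student(fake.detach())
    loss_s = F.kl_div(F.log_softmax(s_logits, dim=1), F.softmax(noisy_t, dim=1),
                      reduction="batchmean")
    opt_s.zero_grad()
    loss_s.backward()
    opt_s.step()

    # Generator seeks samples on which teacher and student still disagree,
    # a common data-free distillation objective (assumed here, not from the paper).
    fake2 = generator(z)
    loss_g = -F.kl_div(F.log_softmax(student(fake2), dim=1),
                       F.softmax(teacher(fake2), dim=1), reduction="batchmean")
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

In this kind of loop the generator, teacher, and student interact in one alternating procedure, which is consistent with the abstract's claim that the learning brings three parties together in a unified way; how DPDFD actually defines the losses and accounts for the privacy budget is not given in the excerpt.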
