March 29, 2024, 4:11 a.m. | Gaurav Kumar Nayak, Inder Khatri, Ruchit Rawal, Anirban Chakraborty

cs.CR updates on arXiv.org

arXiv:2211.01579v3 Announce Type: replace-cross
Abstract: Companies often safeguard their trained deep models (i.e., details of the architecture, learnt weights, training details, etc.) from third-party users by exposing them only as black boxes through APIs. Moreover, they may not even provide access to the training data due to proprietary reasons or sensitivity concerns. In this work, we propose a novel defense mechanism for black-box models against adversarial attacks in a data-free setup. We construct synthetic data via generative model …
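To make the setting concrete, below is a minimal sketch (not the paper's actual method) of the data-free, black-box scenario the abstract describes: the defender has no training data and only query access to the deployed model, so synthetic inputs are drawn from a generator and sent to the black-box API, which returns only output probabilities. The generator architecture, the 32x32 / 10-class shapes, and the `black_box_predict` stub are illustrative assumptions.

```python
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Toy DCGAN-style generator mapping noise vectors to 32x32 RGB images."""

    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0, bias=False),  # 1x1 -> 4x4
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),          # 4x4 -> 8x8
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1, bias=False),           # 8x8 -> 16x16
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, 2, 1, bias=False),            # 16x16 -> 32x32
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


# Stand-in for the vendor's deployed classifier. In practice this would be an
# HTTP call to an API that returns only output probabilities; a frozen random
# CNN plays that role here purely for illustration.
_VICTIM = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
).eval()


def black_box_predict(images: torch.Tensor) -> torch.Tensor:
    """Query the black box: inputs go in, only soft labels come back."""
    with torch.no_grad():
        return torch.softmax(_VICTIM(images), dim=1)


if __name__ == "__main__":
    latent_dim = 100
    gen = Generator(latent_dim)
    z = torch.randn(8, latent_dim, 1, 1)
    synthetic = gen(z)                          # data-free surrogate inputs
    soft_labels = black_box_predict(synthetic)  # no weights or gradients exposed
    print(synthetic.shape, soft_labels.shape)   # (8, 3, 32, 32) and (8, 10)
```

Any defense built in this setting can only consume the generator's synthetic samples and the black box's soft labels; the victim's architecture and weights stay hidden, which is exactly the constraint the abstract's "data-free" and "black box" qualifiers impose.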

Tags: adversarial attacks, black box, data-free, defense, cs.CR, cs.CV, cs.LG
