Nov. 4, 2022, 1:20 a.m. | Gaurav Kumar Nayak, Inder Khatri, Shubham Randive, Ruchit Rawal, Anirban Chakraborty

cs.CR updates on arXiv.org (arxiv.org)

Companies often safeguard their trained deep models (i.e., details of the
architecture, learnt weights, training procedure, etc.) from third-party users by
exposing them only as black boxes through APIs. Moreover, they may not even
provide access to the training data due to proprietary reasons or sensitivity
concerns. We make the first attempt to provide adversarial robustness to
black-box models in a data-free setup. We construct synthetic data via a
generative model and train a surrogate network using model stealing …
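The abstract describes the general recipe of data-free model stealing: query the black-box model on generator-produced synthetic inputs and train a surrogate to imitate its outputs. The sketch below illustrates that generic idea only, not the paper's specific method; the generator architecture, the `black_box` callable, the KL-divergence objective, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of data-free model stealing (illustrative, not the paper's method).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Generator(nn.Module):
    """Maps latent noise to synthetic 3x32x32 images (assumed input size)."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8),
            nn.ReLU(inplace=True),
            nn.Unflatten(1, (128, 8, 8)),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(128, 64, 3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(64, 3, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)


def steal_step(generator, surrogate, black_box, opt_s, latent_dim=100, batch=64):
    """One surrogate-training step: query the black box on synthetic inputs
    and minimise the KL divergence between surrogate and black-box outputs.
    `black_box` is assumed to return output probabilities only (API access)."""
    z = torch.randn(batch, latent_dim)
    x = generator(z).detach()                  # synthetic query images
    with torch.no_grad():
        teacher_probs = black_box(x)           # no gradients or weights exposed
    student_logp = F.log_softmax(surrogate(x), dim=1)
    loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean")
    opt_s.zero_grad()
    loss.backward()
    opt_s.step()
    return loss.item()
```

In such a pipeline, the resulting surrogate can then be hardened with standard adversarial training, since gradients are available through it even though the original model remains a black box.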

Tags: adversarial attacks, black box, data-free, defense
