March 29, 2024, 4:11 a.m. | Gaurav Kumar Nayak, Inder Khatri, Ruchit Rawal, Anirban Chakraborty

cs.CR updates on arXiv.org

arXiv:2211.01579v3 Announce Type: replace-cross
Abstract: Several companies often safeguard their trained deep models (i.e., details of architecture, learnt weights, training details etc.) from third-party users by exposing them only as black boxes through APIs. Moreover, they may not even provide access to the training data due to proprietary reasons or sensitivity concerns. In this work, we propose a novel defense mechanism for black box models against adversarial attacks in a data-free set up. We construct synthetic data via generative model …
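To make the setting concrete, below is a minimal sketch of the data-free black-box scenario the abstract describes: synthetic inputs come from a generative model and are labelled only through the deployed model's API, with no access to weights, gradients, or training data. The generator architecture, the `black_box_predict` stub, and all hyperparameters are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch of the data-free black-box setting (assumptions, not the
# authors' implementation): a generator supplies synthetic inputs, and the
# deployed model is reachable only through an API that returns probabilities.
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Maps latent noise to synthetic 32x32 RGB images (assumed architecture)."""

    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8),
            nn.ReLU(inplace=True),
            nn.Unflatten(1, (128, 8, 8)),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(128, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(64, 3, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


def black_box_predict(x: torch.Tensor, num_classes: int = 10) -> torch.Tensor:
    """Stand-in for the deployed model's API: returns class probabilities only.

    In practice this would be a remote call; no gradients or weights are exposed.
    The random output here is a placeholder so the sketch runs end to end.
    """
    return torch.softmax(torch.randn(x.size(0), num_classes), dim=1)


if __name__ == "__main__":
    latent_dim, batch_size = 100, 16
    gen = Generator(latent_dim)

    z = torch.randn(batch_size, latent_dim)
    synthetic = gen(z)                           # data-free: no training data used
    soft_labels = black_box_predict(synthetic)   # only API outputs are observable

    print(synthetic.shape, soft_labels.shape)
    # torch.Size([16, 3, 32, 32]) torch.Size([16, 10])
```

Any defense trained in this regime can rely only on such synthetic samples and the API's responses, which is the constraint the proposed mechanism operates under.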

