Dec. 1, 2022, 2:10 a.m. | Tooba Khan, Kumar Madhukar, Subodh Vishnu Sharma

cs.CR updates on arXiv.org arxiv.org

The adversarial input generation problem has become central to establishing
the robustness and trustworthiness of deep neural nets, especially when they
are used in safety-critical application domains such as autonomous vehicles and
precision medicine. It is also practically challenging for multiple
reasons: scalability is a common issue owing to large-sized networks, and the
generated adversarial inputs often lack important qualities such as naturalness
and output-impartiality. We relate this problem to the task of patching neural
nets, i.e., applying small changes in …
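To make the problem concrete, a minimal sketch of adversarial input generation is given below. It uses the well-known Fast Gradient Sign Method (FGSM) on a toy logistic-regression model purely for illustration; this is a standard baseline technique, not the patching-based approach the paper proposes, and the model, weights, and `epsilon` value are assumptions chosen for the example.

```python
import numpy as np

def fgsm_adversarial_input(x, w, b, y_true, epsilon=0.1):
    """Fast Gradient Sign Method on a logistic-regression model.

    Perturbs the input x by epsilon in the direction that increases
    the loss on the true label: x_adv = x + epsilon * sign(dL/dx).
    """
    # Forward pass: p = sigmoid(w . x + b)
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))
    # Gradient of the binary cross-entropy loss w.r.t. the input x:
    # dL/dz = p - y_true and dz/dx = w, so dL/dx = (p - y_true) * w
    grad_x = (p - y_true) * w
    return x + epsilon * np.sign(grad_x)

# Toy example: a 3-feature input currently classified as class 1
w = np.array([0.5, -1.0, 0.25])
b = 0.0
x = np.array([1.0, -1.0, 2.0])
x_adv = fgsm_adversarial_input(x, w, b, y_true=1.0, epsilon=0.1)
```

Each coordinate of `x_adv` differs from `x` by exactly `epsilon`, which is why FGSM perturbations can look unnatural at larger budgets; the naturalness and output-impartiality concerns raised in the abstract are about precisely such qualities of the generated inputs.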

adversarial input patching
