Aug. 19, 2022, 1:20 a.m. | Hung-Jui Wang, Yu-Yu Wu, Shang-Tse Chen

cs.CR updates on arXiv.org (arxiv.org)

Malicious attackers can generate targeted adversarial examples by imposing
human-imperceptible noise on images, forcing neural network models to produce
specific incorrect outputs. With cross-model transferable adversarial examples,
the vulnerability of neural networks remains even if the model information is
kept secret from the attacker. Recent studies have shown the effectiveness of
ensemble-based methods in generating transferable adversarial examples.
However, existing methods fall short under the more challenging scenario of
creating targeted attacks transferable among distinct models. In this work, we …

