Aug. 26, 2022, 1:20 a.m. | Xinyi Wang, Simon Yusuf Enoch, Dong Seong Kim

cs.CR updates on arXiv.org

Widely used deep learning models have been found to lack robustness: small
perturbations can fool state-of-the-art models into making incorrect
predictions. While there are many high-performance attack generation methods,
most of them add perturbations directly to the original data and measure them
using L_p norms; this can break the major structure of the data, thus creating
invalid attacks. In this paper, we propose a black-box attack which, instead of
modifying the original data, modifies latent features of the data extracted by …
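The abstract is cut off before it describes the actual algorithm, but the core idea it states is to search in a latent feature space rather than add L_p-bounded noise in the input space. As a minimal illustrative sketch only, not the paper's method, the following PyTorch snippet perturbs the latent code of an assumed autoencoder and queries a black-box classifier via random search; the AutoEncoder class, latent_attack function, and all parameters are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical setup (not from the paper): an autoencoder whose encoder
# extracts latent features and whose decoder maps them back to data space.
class AutoEncoder(nn.Module):
    def __init__(self, dim_in=784, dim_latent=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim_in, 128), nn.ReLU(), nn.Linear(128, dim_latent))
        self.decoder = nn.Sequential(
            nn.Linear(dim_latent, 128), nn.ReLU(), nn.Linear(128, dim_in))

@torch.no_grad()
def latent_attack(ae, classifier, x, y_true, n_queries=100, sigma=0.1):
    """Black-box random search in latent space: sample perturbations of the
    latent features, decode each candidate back to the data space, and return
    the first one the (query-only) classifier misclassifies."""
    z = ae.encoder(x)
    for _ in range(n_queries):
        z_adv = z + sigma * torch.randn_like(z)   # perturb latent features
        x_adv = ae.decoder(z_adv)                 # map back to data space
        pred = classifier(x_adv).argmax(dim=-1)   # query the black-box model
        if (pred != y_true).all():
            return x_adv
    return None  # no adversarial example found within the query budget
```

Because the perturbation is applied in the latent space and then decoded, the candidate stays near the data manifold the autoencoder has learned, which is the abstract's stated motivation for avoiding direct L_p-norm perturbations that can break the major structure of the data.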

