July 21, 2022, 1:20 a.m. | Keiichiro Yamamura, Haruki Sato, Nariaki Tateiwa, Nozomi Hata, Toru Mitsutake, Issa Oe, Hiroki Ishikura, Katsuki Fujisawa

cs.CR updates on arXiv.org

Deep learning models are vulnerable to adversarial examples, and adversarial
attacks used to generate such examples have attracted considerable research
interest. Although existing methods based on the steepest descent have achieved
high attack success rates, ill-conditioned problems occasionally reduce their
performance. To address this limitation, we utilize the conjugate gradient (CG)
method, which is effective for this type of problem, and propose a novel attack
algorithm inspired by the CG method, named the Auto Conjugate Gradient (ACG)
attack. The results …
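The abstract contrasts steepest-descent attacks with a conjugate-gradient-inspired update. As an illustrative sketch only (not the authors' exact ACG algorithm), a CG-style attack step combines the current gradient with the previous search direction via a Hestenes-Stiefel-style coefficient, then takes a signed step projected back into the L-infinity ball; the function name and parameters here are hypothetical:

```python
import numpy as np

def cg_attack_step(x, x_orig, grad, prev_grad, prev_dir, eps, alpha):
    """One conjugate-gradient-style step for an L-infinity adversarial attack.

    Hypothetical sketch: we minimize a loss f, so the steepest-descent
    direction is -grad, and the CG direction is s_t = -g_t + beta * s_{t-1}
    with a Hestenes-Stiefel-style beta. This is NOT the exact ACG update
    from the paper, only an illustration of the idea.
    """
    if prev_dir is None:
        # First iteration: plain steepest descent.
        direction = -grad
    else:
        y = grad - prev_grad                           # gradient difference
        denom = np.dot(prev_dir.ravel(), y.ravel())
        beta = 0.0 if abs(denom) < 1e-12 else np.dot(grad.ravel(), y.ravel()) / denom
        direction = -grad + beta * prev_dir            # conjugate direction
    x_new = x + alpha * np.sign(direction)             # signed step, PGD-style
    x_new = np.clip(x_new, x_orig - eps, x_orig + eps) # project into eps-ball
    return np.clip(x_new, 0.0, 1.0), direction         # keep pixels in [0, 1]
```

Reusing the previous direction is what lets CG-style methods make progress on ill-conditioned problems where pure steepest descent stalls.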

Tags: adversarial attacks, cs.LG
