March 8, 2024, 5:11 a.m. | Olukorede Fakorede, Modeste Atsague, Jin Tian

cs.CR updates on arXiv.org (arxiv.org)

arXiv:2403.04070v1 Announce Type: cross
Abstract: Adversarial Training (AT) effectively improves the robustness of Deep Neural Networks (DNNs) to adversarial attacks. Generally, AT involves training DNN models with adversarial examples obtained within a pre-defined, fixed perturbation bound. Notably, individual natural examples from which these adversarial examples are crafted exhibit varying degrees of intrinsic vulnerabilities, and as such, crafting adversarial examples with fixed perturbation radius for all instances may not sufficiently unleash the potency of AT. Motivated by this observation, we propose …
