Web: http://arxiv.org/abs/2204.11837

April 27, 2022, 1:20 a.m. | Weizhen Xu, Chenyi Zhang, Fangzhen Zhao, Liangda Fang

cs.CR updates on arXiv.org arxiv.org

Adversarial attacks degrade the functionality and accuracy of Deep Neural
Networks (DNNs) by introducing subtle perturbations into their inputs. In this
work, we propose a new Mask-based Adversarial Defense scheme (MAD) for DNNs to
mitigate the negative effects of adversarial attacks. To be precise, our
method promotes the robustness of a DNN by randomly masking a portion of
potential adversarial images, and as a result, the output of the DNN becomes
more tolerant to minor input perturbations. Compared …
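The masking idea in the abstract can be sketched as a simple input-preprocessing step: zero out a random fraction of patches in the image before it reaches the network. This is a minimal illustration, not the paper's implementation; the function name, the patch size, and the `mask_ratio` parameter are assumptions for the sketch.

```python
import numpy as np

def random_mask(image: np.ndarray, mask_ratio: float = 0.25,
                patch: int = 4, rng=None) -> np.ndarray:
    """Zero out a random fraction of non-overlapping patches in an image.

    Hypothetical sketch of mask-based preprocessing; `mask_ratio` and
    `patch` are illustrative parameters, not the paper's settings.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    out = image.copy()
    # Top-left corners of a non-overlapping patch grid.
    coords = [(i, j) for i in range(0, h, patch) for j in range(0, w, patch)]
    n_mask = int(len(coords) * mask_ratio)
    # Pick patches to mask without replacement, then zero them out.
    for idx in rng.choice(len(coords), size=n_mask, replace=False):
        i, j = coords[idx]
        out[i:i + patch, j:j + patch] = 0
    return out

# Example: mask a quarter of the patches in a 32x32 image of ones.
img = np.ones((32, 32))
masked = random_mask(img, mask_ratio=0.25, patch=4,
                     rng=np.random.default_rng(0))
```

At inference time such a transform can be applied once (or averaged over several random masks) before the classifier, so that a perturbation confined to the masked region cannot influence the prediction.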

adversarial defense
