April 6, 2023, 1:10 a.m. | Thibault Maho, Seyed-Mohsen Moosavi-Dezfooli, Teddy Furon

cs.CR updates on arXiv.org

The transferability of adversarial examples is a key issue in the security of
deep neural networks. The possibility that an adversarial example crafted for a
source model can also fool a different target model makes the threat of
adversarial attacks more realistic. Measuring transferability is therefore a
crucial problem, but the Attack Success Rate alone does not provide a sound
evaluation. This paper proposes a new methodology for evaluating
transferability that places distortion in a central position. This new tool
shows that transferable attacks …

