May 13, 2024, 4:11 a.m. | Amira Guesmi, Nishant Suresh Aswani, Muhammad Shafique

cs.CR updates on arXiv.org

arXiv:2405.06278v1 Announce Type: cross
Abstract: Adversarial attacks pose a significant challenge to deploying deep learning models in safety-critical applications. Maintaining model robustness while ensuring interpretability is vital for fostering trust and comprehension in these models. This study investigates the impact on model robustness of Saliency-guided Training (SGT), a technique aimed at improving the clarity of saliency maps to deepen understanding of the model's decision-making process. Experiments were conducted on standard benchmark datasets using various deep learning architectures trained with and …
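The abstract does not spell out the SGT procedure itself. As a rough orientation, saliency-guided training (in the formulation introduced by Ismail et al.) masks the lowest-saliency input features during training and penalizes the divergence between the model's predictions on the original and masked inputs. The sketch below illustrates that idea in PyTorch; the function name `sgt_step`, the mask fraction `k`, the weight `lam`, and the uniform-noise replacement are illustrative assumptions, not details taken from this paper.

```python
import torch
import torch.nn.functional as F

def sgt_step(model, x, y, optimizer, k=0.5, lam=1.0):
    """One saliency-guided training step (sketch, not the paper's exact recipe).

    Masks the fraction `k` of lowest-saliency input features, then trains on
    cross-entropy plus a KL term tying predictions on the original and masked
    inputs together.
    """
    model.train()
    x = x.clone().requires_grad_(True)

    # Saliency: gradient of the true-class logit with respect to the input.
    logits = model(x)
    logits.gather(1, y.unsqueeze(1)).sum().backward()
    saliency = x.grad.detach().abs().flatten(1)

    # Replace the k lowest-saliency features with uniform noise (one of
    # several masking choices; an assumption here).
    n_mask = int(k * saliency.shape[1])
    low_idx = saliency.argsort(dim=1)[:, :n_mask]
    x_masked = x.detach().flatten(1).clone()
    noise = torch.rand_like(x_masked)
    x_masked.scatter_(1, low_idx, noise.gather(1, low_idx))
    x_masked = x_masked.view_as(x)

    # Standard loss plus a KL penalty between original and masked predictions.
    optimizer.zero_grad()
    out = model(x.detach())
    out_masked = model(x_masked)
    loss = F.cross_entropy(out, y) + lam * F.kl_div(
        F.log_softmax(out_masked, dim=1),
        F.log_softmax(out, dim=1),
        log_target=True, reduction="batchmean",
    )
    loss.backward()
    optimizer.step()
    return loss.item()
```

The KL direction and the noise-based masking are design choices that vary across saliency-guidance variants; the study summarized above evaluates how training of this kind interacts with adversarial robustness.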

