Evaluating Adversarial Robustness: A Comparison of FGSM, Carlini-Wagner Attacks, and the Role of Distillation as a Defense Mechanism
April 8, 2024, 4:11 a.m. | Trilokesh Ranjan Sarkar, Nilanjan Das, Pralay Sankar Maitra, Bijoy Some, Ritwik Saha, Orijita Adhikary, Bishal Bose, Jaydip Sen
cs.CR updates on arXiv.org arxiv.org
Abstract: This technical report presents an in-depth study of adversarial attacks targeting Deep Neural Networks (DNNs) used for image classification, together with defense mechanisms aimed at bolstering the robustness of machine learning models. The research focuses on the impact of two prominent attack methodologies: the Fast Gradient Sign Method (FGSM) and the Carlini-Wagner (CW) approach. These attacks are examined against three pre-trained image classifiers: ResNeXt50_32x4d, DenseNet-201, and VGG-19, utilizing the …
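To make the first attack concrete: FGSM perturbs an input by a small step of size ε in the direction of the sign of the loss gradient with respect to that input, x_adv = x + ε·sign(∇x L). The sketch below is illustrative only — it uses a toy NumPy logistic-regression "classifier" with made-up weights, not the paper's pre-trained ImageNet models, and the helper names (`fgsm_perturb`, `loss_grad`) are my own.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM step: move each coordinate by epsilon in the sign of the loss gradient."""
    return x + epsilon * np.sign(grad)

# Toy binary classifier: sigmoid(w @ x). Weights are illustrative, not from the paper.
w = np.array([1.0, -2.0, 0.5])

def loss_grad(x, y, w):
    # Gradient of binary cross-entropy w.r.t. the INPUT x (not the weights):
    # d/dx BCE(sigmoid(w @ x), y) = (p - y) * w
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    return (p - y) * w

x = np.array([0.2, 0.4, -0.1])   # clean input
y = 1.0                          # true label
eps = 0.1                        # L-infinity attack budget

x_adv = fgsm_perturb(x, loss_grad(x, y, w), eps)
```

Each coordinate of `x_adv` differs from `x` by exactly ε, and the model's confidence in the true class drops — the same mechanism the report evaluates at ImageNet scale. The CW attack is instead formulated as an optimization problem over the perturbation and is not captured by this one-step sketch.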