May 6, 2024, 4:11 a.m. | Firuz Juraev, Mohammed Abuhamad, Eric Chan-Tin, George K. Thiruvathukal, Tamer Abuhmed

cs.CR updates on arXiv.org arxiv.org

arXiv:2405.01963v1 Announce Type: new
Abstract: Deep Learning (DL) is rapidly maturing to the point that it can be used in safety- and security-crucial applications. However, adversarial samples, which are undetectable to the human eye, pose a serious threat that can cause the model to misbehave and compromise the performance of such applications. Addressing the robustness of DL models has become crucial to understanding and defending against adversarial attacks. In this study, we perform comprehensive experiments to examine the effect of …

