April 18, 2024, 4:11 a.m. | Khushnaseeb Roshan, Aasim Zafar

cs.CR updates on arXiv.org

arXiv:2404.10796v1 Announce Type: new
Abstract: The rapid advancement of artificial intelligence within the realm of cybersecurity raises significant security concerns. The vulnerability of deep learning models in adversarial attacks is one of the major issues. In adversarial machine learning, malicious users try to fool the deep learning model by inserting adversarial perturbation inputs into the model during its training or testing phase. Subsequently, it reduces the model confidence score and results in incorrect classifications. The novel key contribution of the …
