April 18, 2024, 4:11 a.m. | Khushnaseeb Roshan, Aasim Zafar

cs.CR updates on arXiv.org arxiv.org

arXiv:2404.10796v1 Announce Type: new
Abstract: The rapid advancement of artificial intelligence within the realm of cybersecurity raises significant security concerns. The vulnerability of deep learning models to adversarial attacks is one of the major issues. In adversarial machine learning, malicious users try to fool the deep learning model by inserting adversarially perturbed inputs during its training or testing phase. These perturbations reduce the model's confidence score and result in incorrect classifications. The novel key contribution of the …
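As a minimal sketch of the kind of test-time perturbation the abstract describes, the Fast Gradient Sign Method (FGSM) adds a small step in the direction of the loss gradient with respect to the input. The logistic-regression model, weights, and epsilon below are illustrative toy values, not the paper's setup:

```python
import numpy as np

def fgsm_perturb(x, w, b, y_true, eps):
    """Craft x' = x + eps * sign(d loss / d x) against a logistic model."""
    z = float(w @ x + b)
    p = 1.0 / (1.0 + np.exp(-z))   # model confidence for class 1
    grad_x = (p - y_true) * w      # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy model and a correctly classified input (illustrative values).
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.8, -0.4, 0.3])
y = 1.0

def confidence(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
print(confidence(x))      # high confidence on the clean input
print(confidence(x_adv))  # confidence drops after the perturbation
```

On this toy example the confidence on the true class falls from roughly 0.86 to roughly 0.52, which is the confidence-reduction effect the abstract refers to.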

