April 10, 2024, 4:10 a.m. | Giuseppe Montalbano, Leonardo Banchi

cs.CR updates on arXiv.org arxiv.org

arXiv:2404.05824v1 Announce Type: cross
Abstract: We show that hybrid quantum classifiers based on quantum kernel methods and support vector machines are vulnerable to adversarial attacks: small, engineered perturbations of the input data can deceive the classifier into predicting the wrong result. Nonetheless, we also show that simple defence strategies based on data augmentation with a few crafted perturbations can make the classifier robust against new attacks. Our results find applications in security-critical learning problems and in mitigating the effect …
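The attack-then-augment idea in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's method: a classical RBF kernel stands in for the quantum kernel, the classifier is kernel ridge regression rather than an SVM, and the attack is a simple random search for a label-flipping perturbation. All function names and parameters here are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: two Gaussian blobs at -1 and +1.
X = np.vstack([rng.normal(-1, 0.4, (40, 2)), rng.normal(1, 0.4, (40, 2))])
y = np.concatenate([-np.ones(40), np.ones(40)])

def rbf(A, B, gamma=2.0):
    """Classical RBF kernel, standing in for a quantum kernel (assumption)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit(X, y, lam=1e-3):
    """Kernel ridge classifier: solve (K + lam*I) alpha = y."""
    K = rbf(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(Xtr, alpha, Xte):
    return np.sign(rbf(Xte, Xtr) @ alpha)

alpha = fit(X, y)

def attack(x, t, Xtr, alpha, eps=0.8, trials=300):
    """Random-search attack: look for a small perturbation that flips the prediction."""
    for _ in range(trials):
        d = rng.uniform(-eps, eps, 2)
        if predict(Xtr, alpha, (x + d)[None])[0] != t:
            return x + d
    return None  # no successful attack found for this point

# Craft adversarial examples against the undefended classifier.
adv = [(attack(x, t, X, alpha), t) for x, t in zip(X, y)]
adv = [(a, t) for a, t in adv if a is not None]
A = np.array([a for a, _ in adv])
T = np.array([t for _, t in adv])

# Defence: retrain on the data augmented with the crafted perturbations,
# labelled with their true classes.
X2 = np.vstack([X, A])
y2 = np.concatenate([y, T])
alpha2 = fit(X2, y2)

err_before = (predict(X, alpha, A) != T).mean()   # 1.0 by construction
err_after = (predict(X2, alpha2, A) != T).mean()  # should drop after augmentation
print(err_before, err_after)
```

By construction every crafted point fools the undefended model, while the retrained model has seen those perturbations with their true labels, so its error on them drops. The same augment-with-crafted-perturbations loop is what the abstract describes, applied there to quantum kernels.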

