April 25, 2024, 7:11 p.m. | Vidit Khazanchi, Pavan Kulkarni, Yuvaraj Govindarajulu, Manojkumar Parmar

cs.CR updates on arXiv.org (arxiv.org)

arXiv:2404.15656v1 Announce Type: cross
Abstract: Emerging vulnerabilities in machine learning (ML) models due to adversarial attacks raise concerns about their reliability. Specifically, evasion attacks manipulate models by introducing precise perturbations to input data, causing erroneous predictions. To address this, we propose a methodology combining SHapley Additive exPlanations (SHAP) for feature importance analysis with an innovative Optimal Epsilon technique for conducting evasion attacks. Our approach begins with SHAP-based analysis to understand model vulnerabilities, crucial for devising targeted evasion strategies. The Optimal …

