Privacy Implications of Explainable AI in Data-Driven Systems
June 25, 2024, 4:20 a.m. | Fatima Ezzeddine
cs.CR updates on arXiv.org arxiv.org
Abstract: Machine learning (ML) models, though demonstrably powerful, suffer from a lack of interpretability. This absence of transparency, often referred to as the black-box nature of ML models, undermines trust and underscores the need to enhance their explainability. Explainable AI (XAI) techniques address this challenge by providing frameworks and methods to explain the internal decision-making processes of these complex models. Techniques like Counterfactual Explanations (CF) and Feature Importance play a crucial role in achieving …
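The abstract names Feature Importance as one of the XAI techniques under discussion. As a minimal sketch of what such an explanation looks like in practice, the snippet below computes permutation importance with scikit-learn on a toy dataset; the dataset, model, and parameters are illustrative assumptions, not drawn from the paper.

```python
# Sketch of feature importance, one XAI technique mentioned in the abstract.
# The data and model here are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy data: 5 features, of which only 2 are informative.
X, y = make_classification(n_samples=300, n_features=5, n_informative=2,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Permutation importance: the drop in score when one feature is shuffled,
# averaged over repeats -- a model-agnostic explanation of feature influence.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: {imp:.3f}")
```

Note that, as the paper's title suggests, exposing such explanations to end users can itself leak information about the training data or the model, which is the privacy tension the abstract raises.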