Dikaios: Privacy Auditing of Algorithmic Fairness via Attribute Inference Attacks. (arXiv:2202.02242v2 [cs.CR] UPDATED)
Nov. 28, 2022, 2:10 a.m. | Jan Aalmoes, Vasisht Duddu, Antoine Boutet
cs.CR updates on arXiv.org arxiv.org
Machine learning (ML) models have been deployed for high-stakes applications.
Due to class imbalance in the sensitive attribute observed in the datasets, ML
models are unfair to minority subgroups identified by a sensitive attribute,
such as race or sex. In-processing fairness algorithms ensure that model
predictions are independent of the sensitive attribute. Furthermore, ML models
are vulnerable to attribute inference attacks, in which an adversary identifies
the values of the sensitive attribute by exploiting their distinguishable model
predictions. Despite privacy and fairness being …
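The abstract describes attribute inference attacks, where an adversary recovers a sensitive attribute from a model's prediction confidences. A minimal sketch of that idea, using synthetic data and scikit-learn (this is an illustrative toy, not the Dikaios method from the paper):

```python
# Toy attribute inference attack: the adversary trains an "attack model"
# that maps a target model's prediction confidences back to the sensitive
# attribute. All data here is synthetic (assumption for illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Sensitive attribute s (binary group) leaks into the features.
s = rng.integers(0, 2, n)
x = rng.normal(0, 1, (n, 4)) + 0.8 * s[:, None]
y = (x.sum(axis=1) + rng.normal(0, 1, n) > 1).astype(int)

x_tr, x_te, y_tr, y_te, s_tr, s_te = train_test_split(x, y, s, random_state=0)

# Target model: trained only on the task label, never sees s directly.
target = LogisticRegression().fit(x_tr, y_tr)

# Adversary: observes the target's output confidences and fits an attack
# model to recover s from them.
attack = LogisticRegression().fit(target.predict_proba(x_tr), s_tr)

acc = attack.score(target.predict_proba(x_te), s_te)
print(f"attribute inference accuracy: {acc:.2f}")
```

Because the sensitive attribute shifts the features, the target's confidences are distinguishable across groups, so the attack accuracy lands well above the 0.5 random-guess baseline; in-processing fairness constraints aim to remove exactly this dependence.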