Dikaios: Privacy Auditing of Algorithmic Fairness via Attribute Inference Attacks. (arXiv:2202.02242v1 [cs.CR])
Feb. 7, 2022, 2:20 a.m. | Jan Aalmoes, Vasisht Duddu, Antoine Boutet
cs.CR updates on arXiv.org
Machine learning (ML) models have been deployed for high-stakes applications.
Due to class imbalance in the sensitive attribute observed in the datasets, ML
models are unfair to minority subgroups identified by a sensitive attribute,
such as race or sex. In-processing fairness algorithms ensure that model
predictions are independent of the sensitive attribute. Furthermore, ML models
are vulnerable to attribute inference attacks, in which an adversary can
identify the values of the sensitive attribute by exploiting their
distinguishable model predictions. Despite privacy and fairness being …
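The attack the abstract describes can be illustrated with a minimal sketch (this is a generic attribute inference attack on synthetic data, not the paper's Dikaios method or its datasets): a target model is trained on a task label only, yet its prediction probabilities still leak a correlated sensitive attribute, which an adversary's attack model then recovers.

```python
# Hypothetical sketch of an attribute inference attack on synthetic data.
# Assumption: the sensitive attribute s shifts the feature distribution,
# so the target model's outputs are distinguishable across subgroups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

s = rng.integers(0, 2, size=n)                     # sensitive attribute (binary)
X = rng.normal(loc=s[:, None] * 1.5, size=(n, 4))  # features correlated with s
y = (X.sum(axis=1) + rng.normal(size=n) > 3).astype(int)  # task label

# Target model: trained on the task label only, never sees s directly.
target = LogisticRegression().fit(X[:1000], y[:1000])

# Adversary: observes the target's prediction probabilities on fresh
# records and trains an attack model to recover s from them alone.
probs = target.predict_proba(X[1000:])
attack = LogisticRegression().fit(probs[:500], s[1000:1500])
acc = attack.score(probs[500:], s[1500:])
print(f"attribute inference accuracy: {acc:.2f}")  # typically well above the 0.5 random baseline here
```

Because the attack model sees only the target's output probabilities, this mirrors the black-box threat model the abstract refers to: the leakage comes entirely from subgroup-distinguishable predictions.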
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Systems Security Officer (ISSO) (Remote within HR Virginia area)
@ OneZero Solutions | Portsmouth, VA, USA
Security Analyst
@ UNDP | Tripoli (LBY), Libya
Senior Incident Response Consultant
@ Google | United Kingdom
Product Manager II, Threat Intelligence, Google Cloud
@ Google | Austin, TX, USA; Reston, VA, USA
Cloud Security Analyst
@ Cloud Peritus | Bengaluru, India