When Fairness Meets Privacy: Fair Classification with Semi-Private Sensitive Attributes. (arXiv:2207.08336v2 [cs.LG] UPDATED)
cs.CR updates on arXiv.org
Machine learning models have demonstrated promising performance in many areas. However, concerns that they can be biased against specific demographic groups hinder their adoption in high-stakes applications. It is therefore essential to ensure fairness in machine learning models. Most previous efforts to mitigate bias require direct access to sensitive attributes. Nonetheless, collecting sensitive attributes from users at scale is often infeasible, given users' privacy concerns during data collection. Privacy mechanisms such as local differential privacy (LDP) …