Nov. 21, 2022, 2:20 a.m. | Jan Aalmoes, Vasisht Duddu, Antoine Boutet

cs.CR updates on arXiv.org arxiv.org

Machine learning (ML) models have been deployed for high-stakes applications, e.g., healthcare and criminal justice. Prior work has shown that ML models are vulnerable to attribute inference attacks, in which an adversary with some background knowledge trains an ML attack model to infer sensitive attributes by exploiting distinguishable model predictions. However, some prior attribute inference attacks make strong assumptions about the adversary's background knowledge (e.g., the marginal distribution of the sensitive attribute) and pose no more privacy risk than statistical inference. Moreover, none of …

Tags: attacks, fairness
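To make the attack setting concrete, here is a minimal sketch of an attribute inference attack of the kind the abstract describes: an attack model is trained to map a target model's prediction probabilities to a sensitive attribute. All names, the synthetic data, and the scikit-learn setup are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: X are non-sensitive features, s is a binary
# sensitive attribute, y is the task label. The features are made
# to correlate with s so the attack has a signal to exploit.
n = 2000
s = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5)) + s[:, None] * 0.5
y = (X.sum(axis=1) + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, s, random_state=0
)

# Target model: trained only on the main task; it never sees s.
target = LogisticRegression().fit(X_tr, y_tr)

# Adversary's background knowledge: an auxiliary set with known s.
# The attack model learns to infer s from the target's output
# probabilities, which differ across sensitive groups.
aux_preds = target.predict_proba(X_tr)
attack = RandomForestClassifier(random_state=0).fit(aux_preds, s_tr)

# Attack: infer s for victims from the target model's outputs alone.
victim_preds = target.predict_proba(X_te)
acc = attack.score(victim_preds, s_te)
base = max(s_te.mean(), 1 - s_te.mean())
print(f"attribute inference accuracy: {acc:.2f} (majority baseline {base:.2f})")
```

The gap between the attack's accuracy and the majority-class baseline is what distinguishes a genuine privacy leak from the statistical inference the abstract mentions: an attack that only matches the marginal distribution of s adds no risk.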
