Aug. 9, 2023, 1:10 a.m. | Catherine Huang, Chelse Swoopes, Christina Xiao, Jiaqi Ma, Himabindu Lakkaraju

cs.CR updates on arXiv.org arxiv.org

Machine learning models are increasingly utilized across impactful domains to
predict individual outcomes. As such, many models provide algorithmic recourse
to individuals who receive negative outcomes. However, recourse can also be
leveraged by adversaries to disclose private information. This work presents
the first attempt at mitigating such attacks. We present two novel methods to
generate differentially private recourse: Differentially Private Model (DPM)
and Laplace Recourse (LR). Using logistic regression classifiers and real-world
and synthetic datasets, we find that DPM and …
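The abstract only names the two methods, so the sketch below illustrates the general idea of applying the Laplace mechanism to a recourse vector for a logistic regression rejection. The function name, the minimum-norm recourse baseline, the L1 sensitivity bound, and the epsilon value are assumptions for illustration, not the paper's LR algorithm.

```python
import numpy as np

def laplace_perturbed_recourse(x, w, b, epsilon, sensitivity=1.0, rng=None):
    """Illustrative Laplace-mechanism perturbation of a recourse vector.

    x: rejected individual's feature vector
    w, b: logistic regression weights and bias (decision boundary w.x + b = 0)
    epsilon: privacy budget; sensitivity: assumed L1 sensitivity of the recourse
    NOTE: this is a hedged sketch, not the paper's implementation.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Minimum-norm move onto the decision boundary (a common recourse baseline).
    margin = w @ x + b
    recourse = x - (margin / (w @ w)) * w
    # Laplace mechanism: per-coordinate noise with scale sensitivity / epsilon.
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=x.shape)
    return recourse + noise

# Example usage with toy numbers (assumed values, not from the paper).
w = np.array([0.8, -0.5]); b = -0.2
x = np.array([0.1, 0.9])          # an individual receiving a negative outcome
private_recourse = laplace_perturbed_recourse(x, w, b, epsilon=1.0)
```

The design choice here is the standard one for differential privacy: the noise scale grows with the sensitivity of the released quantity and shrinks as the privacy budget epsilon grows, trading recourse accuracy for privacy.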

