Disparate Impact in Differential Privacy from Gradient Misalignment. (arXiv:2206.07737v1 [cs.LG])
June 17, 2022, 1:20 a.m. | Maria S. Esipova, Atiyeh Ashari Ghomi, Yaqiao Luo, Jesse C. Cresswell
cs.CR updates on arXiv.org
As machine learning becomes more widespread throughout society, aspects
including data privacy and fairness must be carefully considered, and are
crucial for deployment in highly regulated industries. Unfortunately, the
application of privacy enhancing technologies can worsen unfair tendencies in
models. In particular, one of the most widely used techniques for private model
training, differentially private stochastic gradient descent (DPSGD),
frequently intensifies disparate impact on groups within data. In this work we
study the fine-grained causes of unfairness in DPSGD and …
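The mechanism the abstract alludes to is DPSGD's per-example gradient clipping followed by Gaussian noise addition; clipping rescales large per-example gradients, which can systematically distort the update direction for some groups. Below is a minimal NumPy sketch of one DPSGD step on a least-squares objective; the function name, the linear model, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dpsgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One illustrative DPSGD step on 0.5 * (x_i . w - y_i)^2 per example.

    Each per-example gradient is clipped to L2 norm <= clip_norm, the clipped
    gradients are summed, Gaussian noise with scale noise_multiplier * clip_norm
    is added, and the averaged result is applied as the update.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = X.shape[0]
    summed = np.zeros_like(w)
    for i in range(n):
        # Per-example gradient of the squared error for example i.
        g = (X[i] @ w - y[i]) * X[i]
        norm = np.linalg.norm(g)
        # Clip: rescale only when the gradient norm exceeds clip_norm.
        summed += g / max(1.0, norm / clip_norm)
    # Gaussian mechanism: noise calibrated to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    return w - lr * (summed + noise) / n
```

The clipping step is the source of the "gradient misalignment" the title refers to: examples whose gradients exceed `clip_norm` contribute a rescaled direction, so groups that systematically produce large gradients see their influence on the averaged update reduced.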
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
Ford Pro Tech and FCSD Tech – Product Manager, Cyber Security
@ Ford Motor Company | Chennai, Tamil Nadu, India
Cloud Data Encryption and Cryptography Automation Expert
@ Ford Motor Company | Chennai, Tamil Nadu, India
SecOps Analyst
@ Atheneum | Berlin, Berlin, Germany
Consulting Director, Cloud Security, Proactive Services (Unit 42)
@ Palo Alto Networks | Santa Clara, CA, United States