Holding Secrets Accountable: Auditing Privacy-Preserving Machine Learning
Feb. 27, 2024, 5:11 a.m. | Hidde Lycklama, Alexander Viand, Nicolas Küchler, Christian Knabenhans, Anwar Hithnawi
cs.CR updates on arXiv.org
Abstract: Recent advancements in privacy-preserving machine learning (PPML) are paving the way to extend the benefits of ML to highly sensitive data that, until now, has been hard to utilize due to privacy concerns and regulatory constraints. Simultaneously, there is a growing emphasis on enhancing the transparency and accountability of machine learning, including the ability to audit ML deployments. While ML auditing and PPML have both been the subjects of intensive research, they have predominantly been examined …