Dec. 1, 2022, 2:10 a.m. | Ana-Maria Creţu, Florent Guépin, Yves-Alexandre de Montjoye

cs.CR updates on arXiv.org

Machine learning models are often trained on sensitive and proprietary datasets. Yet what a model leaks about its dataset -- and under which conditions -- is not well understood. Most previous works study the leakage of information about an individual record. Yet in many situations, global dataset information, such as the underlying distribution, e.g. $k$-way marginals or correlations, is similarly sensitive or secret. We here explore for the first time whether a model leaks information about the correlations between the …
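The excerpt stops before describing the attack itself, so the sketch below is purely illustrative of the general idea: an adversary tries to recover a pairwise correlation of the private training data from a released model. Everything in it is an assumption introduced for illustration, not the paper's method: the shadow-model approach, the synthetic Gaussian data, the logistic-regression victim, the linear meta-model, and the helper names make_dataset and observed_model are all hypothetical.

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

def make_dataset(rho, n=2000):
    # Two features with Pearson correlation rho, plus a simple derived label.
    cov = [[1.0, rho], [rho, 1.0]]
    X = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

def observed_model(X, y):
    # What the attacker observes: the released model's parameters.
    m = LogisticRegression(max_iter=1000).fit(X, y)
    return np.concatenate([m.coef_.ravel(), m.intercept_])

# Shadow phase: train models on synthetic datasets with *known* correlations.
shadow_rhos = rng.uniform(-0.9, 0.9, size=200)
shadow_obs = np.array([observed_model(*make_dataset(r)) for r in shadow_rhos])

# Meta-model mapping released parameters -> training-data correlation.
meta = LinearRegression().fit(shadow_obs, shadow_rhos)

# Attack a victim model whose training correlation (0.6 here) is secret.
victim_obs = observed_model(*make_dataset(0.6))
print("inferred correlation:", float(meta.predict(victim_obs.reshape(1, -1))[0]))

The point of the toy is only that the released parameters carry information about a global property of the training data (here, how correlated the two inputs are), which a meta-model can learn to read off; the paper's actual attack and threat model are not described in this excerpt.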

attacks, correlation, machine learning, machine learning models
