March 7, 2024, 5:11 a.m. | Yun Lu, Malik Magdon-Ismail, Yu Wei, Vassilis Zikas

cs.CR updates on arXiv.org

arXiv:2309.01243v2 Announce Type: replace
Abstract: To achieve differential privacy (DP), one typically randomizes the output of the underlying query. In big data analytics, one often uses randomized sketching/aggregation algorithms to make processing high-dimensional data tractable. Intuitively, such machine learning (ML) algorithms should provide some inherent privacy, yet most, if not all, existing DP mechanisms do not leverage this inherent randomness, resulting in potentially redundant noising.
The motivating question of our work is:
(How) can we improve the utility of DP …
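For context on the standard approach the abstract refers to, the sketch below is an illustrative example only, not the paper's mechanism: it shows the classic Laplace mechanism for a count query (output randomization calibrated to sensitivity) next to a Gaussian random-projection sketch, whose built-in randomness is the kind of "inherent privacy" the abstract alludes to. Function names and parameters (epsilon, sketch_dim) are assumptions made for this example.

```python
# Illustrative sketch only; not the mechanism proposed in arXiv:2309.01243.
import numpy as np

def laplace_count(data, epsilon):
    """Release a counting query under epsilon-DP by adding Laplace noise
    scaled to the query's sensitivity (1 for a count)."""
    true_count = len(data)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

def random_projection_sketch(x, sketch_dim, seed=None):
    """Compress a high-dimensional vector with a Gaussian random projection.
    The projection's randomness is intrinsic to the sketch itself, i.e. the
    kind of randomness existing DP mechanisms typically do not account for."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    proj = rng.normal(scale=1.0 / np.sqrt(sketch_dim), size=(sketch_dim, d))
    return proj @ x

if __name__ == "__main__":
    data = np.ones(1000)  # toy dataset of 1000 records
    print(laplace_count(data, epsilon=0.5))
    x = np.random.default_rng(0).normal(size=10_000)
    print(random_projection_sketch(x, sketch_dim=64, seed=1).shape)
```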
