March 5, 2024, 3:12 p.m. | Sayedeh Leila Noorbakhsh, Binghui Zhang, Yuan Hong, Binghui Wang

cs.CR updates on arXiv.org arxiv.org

arXiv:2403.02116v1 Announce Type: cross
Abstract: Machine learning (ML) is vulnerable to inference attacks (e.g., membership inference, property inference, and data reconstruction) that aim to infer private information about the training data or dataset. Existing defenses are designed for only one specific type of attack, sacrifice significant utility, or are soon broken by adaptive attacks. We address these limitations by proposing an information-theoretic defense framework, called Inf2Guard, against the three major types of inference attacks. Our framework, inspired by the …
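To make the threat model concrete, below is a minimal, hypothetical sketch of the simplest of the attacks mentioned above: a loss-thresholding membership inference attack. It is not the paper's method; the synthetic loss values, the `infer_membership` helper, and the 0.5 threshold are all illustrative assumptions. The premise is that overfit models tend to assign lower loss to training ("member") examples than to unseen ("non-member") ones.

```python
# Sketch of a loss-threshold membership inference attack.
# All values are synthetic; a real attack would query a trained model
# for its loss on each candidate example.
import random

random.seed(0)

# Assumption: members (training examples) get low loss, non-members higher.
member_losses = [random.gauss(0.2, 0.1) for _ in range(1000)]
non_member_losses = [random.gauss(0.8, 0.3) for _ in range(1000)]

def infer_membership(loss, threshold=0.5):
    """Predict 'member' when the model's loss on the example is low."""
    return loss < threshold

correct = sum(infer_membership(l) for l in member_losses) + \
          sum(not infer_membership(l) for l in non_member_losses)
accuracy = correct / 2000
print(f"attack accuracy: {accuracy:.2f}")
```

Under these synthetic distributions the attack does far better than the 50% random-guess baseline, which is exactly the privacy leakage a defense like Inf2Guard aims to suppress while preserving model utility.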

