Feb. 23, 2024, 5:11 a.m. | Giovanni Cherubin, Boris Köpf, Andrew Paverd, Shruti Tople, Lukas Wutschitz, Santiago Zanella-Béguelin

cs.CR updates on arXiv.org

arXiv:2402.14397v1 Announce Type: new
Abstract: Machine learning models trained with differentially-private (DP) algorithms such as DP-SGD enjoy resilience against a wide range of privacy attacks. Although it is possible to derive bounds for some attacks based solely on an $(\varepsilon,\delta)$-DP guarantee, meaningful bounds require a small enough privacy budget (i.e., injecting a large amount of noise), which results in a large loss in utility. This paper presents a new approach to evaluate the privacy of machine learning models against specific …
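As background for the abstract's point that meaningful attack bounds require a small privacy budget, the standard hypothesis-testing consequence of an $(\varepsilon,\delta)$-DP guarantee bounds any membership-inference attack's true-positive rate by $e^{\varepsilon}\cdot\mathrm{FPR} + \delta$. A minimal sketch of that generic bound (illustrative only; this is not the paper's new per-attack method, and the example budgets are assumptions):

```python
import math

def mia_tpr_bound(eps: float, delta: float, fpr: float) -> float:
    """Upper bound on a membership-inference attack's true-positive rate
    implied by an (eps, delta)-DP guarantee: TPR <= e^eps * FPR + delta,
    clipped to 1 since TPR is a probability."""
    return min(1.0, math.exp(eps) * fpr + delta)

# Small budget: the bound is meaningful at a 1% false-positive rate.
tight = mia_tpr_bound(eps=0.5, delta=1e-5, fpr=0.01)   # ~0.0165

# Large budget: e^8 * 0.01 >> 1, so the bound clips to 1 (vacuous),
# matching the abstract's observation that loose budgets yield no
# meaningful protection guarantee from (eps, delta) alone.
vacuous = mia_tpr_bound(eps=8.0, delta=1e-5, fpr=0.01)  # 1.0
```

This is why the paper's approach of evaluating privacy against specific attacks, rather than relying solely on the $(\varepsilon,\delta)$ parameters, is of interest at practical (larger) budgets.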

