May 18, 2023, 1:10 a.m. | Shahab Asoodeh, Mario Diaz

cs.CR updates on arXiv.org

The Noisy-SGD algorithm is widely used for privately training machine
learning models. Traditional privacy analyses of this algorithm assume that the
internal state is publicly revealed, resulting in privacy loss bounds that
increase indefinitely with the number of iterations. However, recent findings
have shown that if the internal state remains hidden, then the privacy loss
might remain bounded. Nevertheless, this remarkable result heavily relies on
the assumption of (strong) convexity of the loss function. It remains an
important open problem …
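For context, here is a minimal sketch of the Noisy-SGD update the abstract refers to: each per-example gradient is clipped to a fixed L2 norm, the clipped gradients are averaged, and Gaussian noise calibrated to the clipping bound is added before the step. All names and parameter values (lr, clip_norm, noise_scale) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def noisy_sgd_step(w, per_example_grads, lr=0.1, clip_norm=1.0,
                   noise_scale=1.0, rng=None):
    """One Noisy-SGD update on parameters w.

    per_example_grads: iterable of gradient arrays, one per example.
    Clipping bounds each example's influence; Gaussian noise scaled to
    the clipping bound provides the privacy guarantee.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Clip each per-example gradient to L2 norm at most clip_norm.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    # Noise std matches the per-average sensitivity clip_norm / n.
    noise = rng.normal(0.0, noise_scale * clip_norm / len(clipped),
                       size=w.shape)
    return w - lr * (avg + noise)
```

Under the traditional analysis, the privacy loss of repeating this step is accounted for at every iteration (so it grows with the iteration count); the hidden-state setting instead assumes only the final w is released.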

