June 29, 2022, 1:20 a.m. | Mantas Mazeika, Bo Li, David Forsyth

cs.CR updates on arXiv.org arxiv.org

Model stealing attacks present a dilemma for public machine learning APIs. To
protect financial investments, companies may be forced to withhold important
information about their models that could facilitate theft, including
uncertainty estimates and prediction explanations. This compromise is harmful
not only to users but also to external transparency. Model stealing defenses
seek to resolve this dilemma by making models harder to steal while preserving
utility for benign users. However, existing defenses have poor performance in
practice, either requiring enormous …
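The abstract describes defenses that perturb a model's outputs so extracted labels are less useful for training a copy, while benign users still get accurate predictions. As a minimal sketch of that general idea (not the paper's actual method), one hypothetical output-perturbation defense adds bounded noise to the returned class probabilities while keeping the top-1 label intact:

```python
import numpy as np

def perturbed_predictions(probs, epsilon=0.1, rng=None):
    """Hypothetical output-perturbation defense sketch.

    Adds bounded uniform noise to a probability vector and
    re-normalizes, but restores the original argmax so benign
    users who only need the predicted label are unaffected.
    `epsilon` controls the utility/security trade-off.
    """
    rng = rng or np.random.default_rng()
    probs = np.asarray(probs, dtype=float)
    noisy = probs + rng.uniform(0.0, epsilon, size=probs.shape)
    noisy /= noisy.sum(axis=-1, keepdims=True)
    # If the noise flipped the top class, swap entries to restore it.
    orig_top, new_top = np.argmax(probs), np.argmax(noisy)
    if orig_top != new_top:
        noisy[orig_top], noisy[new_top] = noisy[new_top], noisy[orig_top]
    return noisy
```

Real defenses in the literature choose the perturbation far more carefully (e.g., to maximally damage the attacker's training gradients), which is exactly where the utility-versus-protection tension the abstract describes arises.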

Tags: adversary, lg, stealing
