March 14, 2024, 4:11 a.m. | Ana-Maria Cretu, Daniel Jones, Yves-Alexandre de Montjoye, Shruti Tople

cs.CR updates on arXiv.org arxiv.org

arXiv:2306.05093v2 Announce Type: replace
Abstract: Machine learning models have been shown to leak sensitive information about their training datasets. Models are increasingly deployed on devices, raising concerns that white-box access to the model parameters increases the attack surface compared to black-box access, which only provides query access. Directly extending the shadow-modelling technique from the black-box to the white-box setting has been shown, in general, not to perform better than black-box-only attacks. A potential reason is misalignment, a known …
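The shadow-modelling technique the abstract refers to can be sketched as follows: train several "shadow" models on data splits whose membership is known, query them, and fit an attack classifier that predicts membership from the query outputs. This is a minimal illustrative sketch in the black-box setting, using synthetic data and scikit-learn models; it is not the paper's implementation, and the feature choice (target-class confidence) is just one common option.

```python
# Hedged sketch of shadow modelling for membership inference
# (black-box setting). Synthetic data and model choices are
# assumptions for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)

def confidence_features(model, X, y):
    # Black-box query access: only the model's output scores are used.
    # Attack feature = confidence assigned to the true class.
    probs = model.predict_proba(X)
    return probs[np.arange(len(y)), y].reshape(-1, 1)

# Train shadow models on known splits; label each record 1 if it was
# in that shadow model's training set ("member"), else 0.
attack_X, attack_y = [], []
for i in range(4):
    idx = rng.permutation(len(X))
    members, non_members = idx[:500], idx[500:1000]
    shadow = RandomForestClassifier(n_estimators=30, random_state=i)
    shadow.fit(X[members], y[members])
    attack_X.append(confidence_features(shadow, X[members], y[members]))
    attack_y.append(np.ones(len(members)))
    attack_X.append(confidence_features(shadow, X[non_members], y[non_members]))
    attack_y.append(np.zeros(len(non_members)))

# The attack model maps query confidences to membership predictions;
# it can then be applied to outputs of the real target model.
attack = LogisticRegression().fit(np.vstack(attack_X), np.concatenate(attack_y))
```

Members tend to receive higher confidence from an overfitted shadow model than non-members, which is the signal the attack classifier exploits; the paper's point is that naively porting this recipe to white-box (parameter-level) access does not, in general, beat the black-box version.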

