June 8, 2022, 1:20 a.m. | Huiyu Li, Nicholas Ayache, Hervé Delingette

cs.CR updates on arXiv.org

In privacy-preserving machine learning, it is common that the owner of the
learned model does not have any physical access to the data. Instead, only a
secured remote access to a data lake is granted to the model owner without any
ability to retrieve data from the data lake. Yet, the model owner may want to
export the trained model periodically from the remote repository, and the
question arises whether this poses a risk of data leakage. In …
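To make the leakage concern concrete, here is a minimal, hypothetical sketch (not the paper's method): if the model owner controls the training code inside the data lake, raw data bytes can be hidden in the low-order mantissa bits of the exported weights, so that an apparently innocuous model export carries data out of the secured environment. All names below are illustrative.

```python
import numpy as np

def embed_bits(weights, secret_bytes):
    """Hide a payload by overwriting the least-significant mantissa bit
    of each float32 weight with one bit of the secret (a toy example of
    steganographic data exfiltration through a model export)."""
    bits = np.unpackbits(np.frombuffer(secret_bytes, dtype=np.uint8))
    w = weights.astype(np.float32).copy()
    raw = w.view(np.uint32)          # reinterpret floats as raw bit patterns
    raw[: bits.size] = (raw[: bits.size] & ~np.uint32(1)) | bits
    return raw.view(np.float32)      # weights are barely perturbed

def extract_bits(weights, n_bytes):
    """Recover the payload from the low bits of the exported weights."""
    raw = weights.view(np.uint32)
    bits = (raw[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

secret = b"patient-scan-0042"                      # stand-in for protected data
model_weights = np.random.randn(1024).astype(np.float32)
exported = embed_bits(model_weights, secret)
recovered = extract_bits(exported, len(secret))
```

Since flipping one mantissa bit changes each weight by at most one unit in the last place, the exported model remains functionally indistinguishable from an honest one, which is why export checks based on model accuracy alone are insufficient.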

Tags: attack, data, data lakes, export, images, medical, networks, safe, stealing
