July 21, 2022, 1:20 a.m. | Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong

cs.CR updates on arXiv.org arxiv.org

Pre-trained encoders are general-purpose feature extractors that can be used
for many downstream tasks. Recent progress in self-supervised learning makes
it possible to pre-train highly effective encoders on large volumes of
unlabeled data, giving rise to the emerging paradigm of encoder as a service
(EaaS). A pre-trained encoder may be deemed confidential because its training
requires large amounts of data and computation, and because its public release
may facilitate misuse of AI, e.g., deepfake generation. In this paper, we
propose the first attack …
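The abstract's framing of a pre-trained encoder as a general-purpose feature extractor for downstream tasks can be illustrated with a minimal sketch. Everything here is hypothetical and not from the paper: a toy `encoder` stands in for a remote EaaS query interface, and a small logistic-regression head plays the role of a downstream task trained on the returned embeddings.

```python
import numpy as np

# Hypothetical stand-in for a pre-trained encoder behind an EaaS API:
# it maps raw inputs to fixed-length feature vectors. In a real service,
# each call to this function would be a remote query.
def encoder(x: np.ndarray) -> np.ndarray:
    rng = np.random.default_rng(0)              # fixed "weights" for illustration
    W = rng.standard_normal((x.shape[-1], 16))
    return np.tanh(x @ W)                       # generic nonlinear feature map

# Downstream task: train a simple logistic-regression head on the
# extracted features -- the encoder itself is reused as-is, unmodified.
def fit_linear_head(feats, labels, lr=0.1, steps=200):
    w = np.zeros(feats.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-feats @ w))    # predicted probabilities
        w -= lr * feats.T @ (p - labels) / len(labels)
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((64, 8))
y = (X[:, 0] > 0).astype(float)                 # toy labels for the sketch

F = encoder(X)                                  # query the encoder for features
w = fit_linear_head(F, y)
acc = float(((F @ w > 0) == y).mean())          # training accuracy of the head
```

Because the downstream user only ever sees feature vectors returned by queries, this interface is exactly what an encoder-stealing adversary can exploit: the same query access suffices to train a surrogate encoder that mimics the returned embeddings.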

