CLIPping Privacy: Identity Inference Attacks on Multi-Modal Machine Learning Models. (arXiv:2209.07341v1 [cs.LG])
Sept. 16, 2022, 1:20 a.m. | Dominik Hintersdorf, Lukas Struppek, Kristian Kersting
cs.CR updates on arXiv.org arxiv.org
As deep learning is now used in many real-world applications, research has
focused increasingly on the privacy of deep learning models and how to prevent
attackers from obtaining sensitive information about the training data.
However, image-text models like CLIP have not yet been studied in the context
of privacy attacks. While membership inference attacks aim to determine whether a
specific data point was used for training, we introduce a new type of privacy
attack, named identity inference attack (IDIA), …
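The distinction above can be illustrated with a toy numerical sketch: instead of asking whether one exact data point was in the training set (membership inference), an identity inference attack queries the model with several images of a person against a list of candidate names, and infers that the person's identity appeared in training if the correct name is predicted more often than chance. The code below is a hedged simulation, not the paper's implementation: `embed` is a hypothetical stand-in for CLIP's encoders, and "membership" is simulated by aligning a member's image embeddings with their name embedding.

```python
import hashlib
import numpy as np

DIM = 16
rng = np.random.default_rng(0)

def embed(text, dim=DIM):
    # Hypothetical stand-in for a CLIP text encoder: a deterministic
    # pseudo-random unit vector per input string (NOT the real model).
    seed = int(hashlib.sha256(text.encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).normal(size=dim)
    return v / np.linalg.norm(v)

def image_embedding(person, was_in_training):
    # Simulated image encoder: for a person seen during training, image
    # embeddings cluster near the name embedding; otherwise they are random.
    name_vec = embed("name:" + person)
    v = name_vec + rng.normal(size=DIM) * 0.1 if was_in_training \
        else rng.normal(size=DIM)
    return v / np.linalg.norm(v)

def idia(person, candidate_names, image_vecs, threshold=0.5):
    # IDIA-style query: zero-shot classify each image against candidate
    # names; flag the identity as "in training data" if the true name
    # wins more often than the threshold fraction of queries.
    name_vecs = np.stack([embed("name:" + n) for n in candidate_names])
    hits = sum(
        candidate_names[int(np.argmax(name_vecs @ img))] == person
        for img in image_vecs  # dot product = cosine sim (unit vectors)
    )
    return hits / len(image_vecs) >= threshold

names = [f"person_{i}" for i in range(20)] + ["Alice"]
member_imgs = [image_embedding("Alice", True) for _ in range(5)]
stranger_imgs = [image_embedding("Alice", False) for _ in range(5)]
print(idia("Alice", names, member_imgs))    # identity likely in training set
print(idia("Alice", names, stranger_imgs))  # identity likely not in training set
```

The key design point mirrored here is that the attacker needs only black-box, zero-shot access: it supplies images and candidate name prompts and observes which name the model prefers, never the training data itself.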