Web: http://arxiv.org/abs/2209.07341

Sept. 16, 2022, 1:20 a.m. | Dominik Hintersdorf, Lukas Struppek, Kristian Kersting

cs.CR updates on arXiv.org

As deep learning is now used in many real-world applications, research has
focused increasingly on the privacy of deep learning models and how to prevent
attackers from obtaining sensitive information about the training data.
However, image-text models like CLIP have not yet been studied in the context
of privacy attacks. While membership inference attacks aim to determine whether
a specific data point was used for training, we introduce a new type of privacy
attack, named identity inference attack (IDIA), …
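To make the distinction concrete, the sketch below shows one plausible way such an identity inference test could be run against a public CLIP checkpoint through the Hugging Face transformers API: query the model with several images of a person against prompts built from candidate names, and infer that the identity appeared in the training data if the true name wins often enough. The function name, prompt template, threshold, and inputs are illustrative assumptions, not the authors' exact procedure.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor


def identity_inference(person_images, candidate_names, true_name,
                       threshold=0.5):
    """Rough IDIA sketch: if CLIP picks the true name for a large
    fraction of the person's images, the identity was plausibly part
    of the training data. All parameters here are assumptions."""
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # One text prompt per candidate name (hypothetical template).
    prompts = [f"a photo of {name}" for name in candidate_names]

    hits = 0
    for image in person_images:
        inputs = processor(text=prompts, images=image,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            # logits_per_image has shape (1, num_candidate_names).
            logits = model(**inputs).logits_per_image
        predicted = candidate_names[logits.argmax(dim=-1).item()]
        hits += int(predicted == true_name)

    # Declare the identity a training-set "member" if the true name
    # is the top match for at least `threshold` of the images.
    return hits / len(person_images) >= threshold


# Example usage (file paths and names are hypothetical):
# imgs = [Image.open(p) for p in ["person_0.jpg", "person_1.jpg"]]
# identity_inference(imgs, ["Alice Example", "Bob Example"], "Alice Example")
```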

attacks, identity, machine learning, machine learning models, privacy
