Feb. 8, 2024, 5:10 a.m. | Dominik Hintersdorf, Lukas Struppek, Daniel Neider, Kristian Kersting

cs.CR updates on arXiv.org arxiv.org

The proliferation of large AI models trained on uncurated, often sensitive web-scraped data has raised significant privacy concerns. One concern is that adversaries can extract information about the training data using privacy attacks. Unfortunately, removing specific information from a model without sacrificing its performance is not straightforward and has proven to be challenging. We propose a rather easy yet effective defense based on backdoor attacks to remove private information such as names and faces of individuals …
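The core idea can be illustrated with a small sketch. Assume a frozen "teacher" encoder whose embeddings are known, and fit a "student" map whose backdoored targets send a private name to the embedding of a neutral concept (e.g. "a person") while all other inputs keep their original embeddings. The names, the linear-map student, and the closed-form least-squares fit below are all illustrative simplifications, not the paper's actual training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

# Hypothetical frozen "teacher" text encoder: each name has a fixed embedding.
names = ["Alice Example", "Bob Example", "Carol Example", "a person"]
teacher = {n: rng.standard_normal(d) for n in names}

private_name = "Alice Example"   # identity to be removed
neutral = "a person"             # anonymized target concept

# Backdoor targets: every input keeps its teacher embedding,
# except the private name, which is remapped to the neutral embedding.
targets = {n: teacher[n] for n in names}
targets[private_name] = teacher[neutral]

# "Student" encoder sketched as a single linear map W over teacher
# embeddings; fit W to the backdoored targets by least squares.
X = np.stack([teacher[n] for n in names])   # original embeddings
Y = np.stack([targets[n] for n in names])   # backdoored targets
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def student(name):
    return teacher[name] @ W

# The private name now collapses onto the neutral concept,
# while unrelated names keep their original embeddings.
print(np.allclose(student(private_name), teacher[neutral], atol=1e-6))
print(np.allclose(student("Bob Example"), teacher["Bob Example"], atol=1e-6))
```

The point of the sketch is that the "backdoor" is benign here: the trigger is the private name itself, and the injected behavior is anonymization rather than an attack.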

