Privacy Backdoors: Stealing Data with Corrupted Pretrained Models
April 2, 2024, 7:11 p.m. | Shanglun Feng, Florian Tramèr
cs.CR updates on arXiv.org
Abstract: Practitioners commonly download pretrained machine learning models from open repositories and finetune them to fit specific applications. We show that this practice introduces a new risk of privacy backdoors. By tampering with a pretrained model's weights, an attacker can fully compromise the privacy of the finetuning data. We show how to build privacy backdoors for a variety of models, including transformers, which enable an attacker to reconstruct individual finetuning samples with guaranteed success. We …
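The attack surface described in the abstract is the ordinary finetuning workflow: weights are downloaded from an open repository, loaded without any integrity check, and then updated on private data. The sketch below illustrates that workflow in PyTorch with a toy model and placeholder data; the paper's actual backdoor construction is not reproduced here, only the point at which a tampered checkpoint would be trusted.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical model architecture; the paper's models and backdoor construction
# are not shown in the abstract, so this is purely illustrative.
model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 2))

# In practice the weights would come from an open repository, e.g.
#   model.load_state_dict(torch.load("downloaded_checkpoint.pt"))
# Nothing in that step verifies that the weights have not been tampered with.

# Toy stand-in for the practitioner's private finetuning data.
private_data = TensorDataset(torch.randn(64, 768), torch.randint(0, 2, (64,)))
loader = DataLoader(private_data, batch_size=8)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Standard finetuning loop. A privacy-backdoored checkpoint could be crafted so
# that these gradient updates imprint individual finetuning samples into specific
# weights, which the attacker can later read back out of the released model.
for inputs, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()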
More from cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
Information Security Engineers
@ D. E. Shaw Research | New York City
Technology Security Analyst
@ Halton Region | Oakville, Ontario, Canada
Senior Cyber Security Analyst
@ Valley Water | San Jose, CA
Senior - Penetration Tester
@ Deloitte | Madrid, Spain
Associate Cyber Incident Responder
@ Highmark Health | Working at Home - Pennsylvania
Senior Insider Threat Analyst
@ IT Concepts Inc. | Woodlawn, Maryland, United States