April 2, 2024, 7:11 p.m. | Shanglun Feng, Florian Tramèr

cs.CR updates on arXiv.org

arXiv:2404.00473v1 Announce Type: new
Abstract: Practitioners commonly download pretrained machine learning models from open repositories and finetune them to fit specific applications. We show that this practice introduces a new risk of privacy backdoors. By tampering with a pretrained model's weights, an attacker can fully compromise the privacy of the finetuning data. We show how to build privacy backdoors for a variety of models, including transformers, which enable an attacker to reconstruct individual finetuning samples with guaranteed success. We …
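To make the weight-tampering idea concrete, here is a minimal, hypothetical sketch (not the paper's actual construction): the weight gradient of a linear unit is a scaled copy of its input, so if a planted "trap" unit fires on a single finetuning sample, one SGD step writes that sample into the weights, and an attacker who knows the published pretrained weights can read it back by subtraction.

```python
import torch
import torch.nn as nn

# Hypothetical trap unit: for y = w·x + b, the weight gradient is
# (dL/dy)·x, a scaled copy of the input. One SGD step therefore
# encodes the sample into the weights; diffing against the original
# pretrained weights recovers it.
torch.manual_seed(0)
d = 8
trap = nn.Linear(d, 1)
w_before = trap.weight.detach().clone()   # attacker knows these weights

x = torch.randn(1, d)                     # one "private" finetuning sample
loss = trap(x).sum()                      # toy loss, so dL/dy = 1
loss.backward()

lr = 0.1
with torch.no_grad():
    trap.weight -= lr * trap.weight.grad  # one plain SGD step

# Attacker: subtract the finetuned weights from the originals.
recovered = (w_before - trap.weight.detach()) / lr
print(torch.allclose(recovered, x))       # True: sample reconstructed
```

In general dL/dy is unknown, so this toy diff recovers the sample only up to scale; the abstract's claim of guaranteed reconstruction refers to the paper's full construction, which this sketch does not reproduce.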

