Jan. 10, 2023, 2:10 a.m. | Karla Pizzi, Franziska Boenisch, Ugur Sahin, Konstantin Böttinger

cs.CR updates on arXiv.org

Model inversion (MI) attacks allow an adversary to reconstruct average per-class
representations of a machine learning (ML) model's training data. It has been
shown that in scenarios where each class corresponds to a different individual,
such as face classifiers, this represents a severe privacy risk. In this work,
we explore a new application for MI: the extraction of speakers' voices from a
speaker recognition system. We present an approach to (1) reconstruct audio
samples from a trained ML model and (2) extract …
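
For readers unfamiliar with MI, the core idea is gradient-based optimization of the model's input: starting from noise, the input is adjusted to maximize the model's confidence for a target class, yielding an approximation of what the model "remembers" about that class. The sketch below illustrates this general technique only, not the authors' specific audio-reconstruction method; the classifier, input shape, and hyperparameters are all assumptions for illustration.

import torch

def invert_class(model, target_class, input_shape=(1, 16000),
                 steps=500, lr=0.01):
    # `model` is a hypothetical trained speaker classifier mapping a raw
    # waveform tensor to per-speaker logits (an assumption, not the paper's model).
    model.eval()
    # Start from random noise shaped like one second of 16 kHz audio.
    x = torch.randn(input_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        # Maximize the target speaker's log-probability by minimizing
        # its negative; gradients flow into the input, not the weights.
        loss = -torch.log_softmax(logits, dim=-1)[0, target_class]
        loss.backward()
        opt.step()
    # The result approximates an average per-class representation the
    # model associates with the target speaker.
    return x.detach()

In practice, published MI attacks typically add regularizers (e.g., total variation or a prior over plausible inputs) so the optimized input stays on the data manifold; the unregularized loop above is the bare minimum needed to convey the mechanism.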

