Introducing Model Inversion Attacks on Automatic Speaker Recognition. (arXiv:2301.03206v1 [cs.SD])
cs.CR updates on arXiv.org
Model inversion (MI) attacks allow the reconstruction of average per-class representations of a machine learning (ML) model's training data. It has been shown that in scenarios where each class corresponds to a different individual, such as face classifiers, this represents a severe privacy risk. In this work, we explore a new application for MI: the extraction of speakers' voices from a speaker recognition system. We present an approach to (1) reconstruct audio samples from a trained ML model and (2) extract …
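The excerpt only sketches the idea, but the general mechanism behind model inversion is optimization over the input space: starting from a blank input, adjust it by gradient ascent until the model assigns high confidence to a chosen class, yielding an approximation of that class's average training representation. The sketch below illustrates this generic mechanism on a hypothetical toy classifier in PyTorch; it is not the paper's method, and the model, feature dimensions, and regularization weight are all assumptions for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a speaker-recognition model: it maps a
# fixed-length audio feature vector to per-speaker logits.
class ToySpeakerClassifier(nn.Module):
    def __init__(self, n_features=256, n_speakers=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, n_speakers),
        )

    def forward(self, x):
        return self.net(x)

def invert_class(model, target_class, n_features=256, steps=500, lr=0.05):
    """Generic gradient-based model inversion: optimize an input so the
    model assigns high confidence to `target_class`, approximating an
    average per-class representation of the training data."""
    model.eval()
    x = torch.zeros(1, n_features, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Maximize the target logit (minimize its negative), with a small
        # L2 prior to keep the reconstruction in a plausible range.
        loss = -logits[0, target_class] + 1e-3 * x.pow(2).sum()
        loss.backward()
        optimizer.step()
    return x.detach()

if __name__ == "__main__":
    model = ToySpeakerClassifier()  # in practice, a trained target model
    reconstruction = invert_class(model, target_class=3)
    print(reconstruction.shape)  # torch.Size([1, 256])
```

In a real attack the target model would be a trained speaker classifier, and the recovered feature vector would then need to be mapped back to audio; the paper's contribution, per the abstract, is an approach for reconstructing audio samples in exactly that setting.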