June 29, 2022, 1:20 a.m. | Ehsan Amid, Om Thakkar, Arun Narayanan, Rajiv Mathews, Françoise Beaufays

cs.CR updates on arXiv.org arxiv.org

Recent work has designed methods to demonstrate that model updates in
automatic speech recognition (ASR) training can leak potentially sensitive
attributes of the utterances used in computing the updates. In this work, we
design the first method to demonstrate information leakage about training
data from trained ASR models. We design Noise Masking, a fill-in-the-blank
style method for extracting targeted parts of training data from trained ASR
models. We demonstrate the success of Noise Masking by using it in four
settings for extracting names …
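The abstract only names the technique, but the core idea of a fill-in-the-blank attack is simple: overwrite the audio frames of a targeted span (e.g., a spoken name) with noise and let the trained model transcribe what was never said. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: noise_mask, load_audio, and asr_model.transcribe are hypothetical stand-ins for whatever ASR stack is in use.

import numpy as np

def noise_mask(waveform: np.ndarray, start: int, end: int,
               noise_std: float = 0.1) -> np.ndarray:
    # Replace the target span [start, end) of a raw waveform with
    # Gaussian noise, leaving the surrounding context intact.
    masked = waveform.copy()
    masked[start:end] = np.random.normal(0.0, noise_std, size=end - start)
    return masked

# Hypothetical usage; asr_model.transcribe stands in for any trained
# ASR decoder and is not an API from the paper:
#   utterance = load_audio("hey_name_play_music.wav")
#   masked = noise_mask(utterance, name_start, name_end)
#   hypothesis = asr_model.transcribe(masked)
# If the decoder emits a real name in place of the noise, that name
# may have been memorized from the training data.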

asr data training
