Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. (arXiv:2204.00032v1 [cs.CR])
April 4, 2022, 1:20 a.m. | Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini
cs.CR updates on arXiv.org
We introduce a new class of attacks on machine learning models. We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak significant private details of training points belonging to other parties. Our active inference attacks connect two independent lines of work targeting the integrity and privacy of machine learning training data. Our attacks are effective across membership inference, attribute inference, and data extraction. For example, our targeted attacks can poison …
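One of the attack families the abstract names, membership inference, can be illustrated with a minimal loss-threshold sketch: the attacker guesses that an example was in the training set when the model's loss on it is below a threshold. Everything below (the toy Gaussian data, the hand-rolled logistic regression, and the median threshold) is an illustrative assumption, not the paper's actual attack, which additionally poisons the training set to amplify leakage.

```python
# Sketch: loss-threshold membership inference on a toy model.
# Assumptions: toy 2-D Gaussian data, hand-trained logistic regression,
# median-loss threshold. Not the paper's poisoning-augmented attack.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Two Gaussian classes in 2-D, n points each."""
    X0 = rng.normal(-1.0, 1.0, size=(n, 2))
    X1 = rng.normal(+1.0, 1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(100)          # "members"
X_out, y_out = make_data(100)              # "non-members", same distribution

# Train logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))
    g = p - y_train
    w -= 0.1 * (X_train.T @ g) / len(y_train)
    b -= 0.1 * g.mean()

def loss(X, y):
    """Per-example cross-entropy loss under the trained model."""
    p = np.clip(1.0 / (1.0 + np.exp(-(X @ w + b))), 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Attack: guess "member" whenever the loss is below the pooled median.
tau = np.median(np.concatenate([loss(X_train, y_train), loss(X_out, y_out)]))
guess_in = loss(X_train, y_train) < tau    # should be True for members
guess_out = loss(X_out, y_out) < tau       # should be False for non-members
accuracy = (guess_in.sum() + (~guess_out).sum()) / (len(guess_in) + len(guess_out))
print(f"membership inference accuracy: {accuracy:.2f}")
```

On a model this small the members barely overfit, so the attack hovers near chance; the paper's point is that a poisoning adversary can push this kind of leakage far above chance.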
Tags: machine learning, machine learning models, poisoning, secrets, truth
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
Security Engineer (SPLUNK) | Remote US
@ Coalfire | United States
Cyber - AppSec - Web PT2
@ KPMG India | Bengaluru, Karnataka, India
Experienced Consulting Engineer, Industrial Risks - Hazard Studies, QRA (F-H-X)
@ Bureau Veritas Group | COURBEVOIE, Ile-de-France, FR
Malware Intern
@ SentinelOne | Bengaluru, Karnataka, India