March 23, 2023, 1:10 a.m. | Yumeki Goto, Nami Ashizawa, Toshiki Shibahara, Naoto Yanai

cs.CR updates on arXiv.org

When an adversary provides poison samples to a machine learning model, privacy leakage, such as a membership inference attack that infers whether a sample was included in the model's training data, becomes more effective because poisoning pushes the target sample toward being an outlier. However, such attacks can be detected, since the poison samples degrade the model's inference accuracy. In this paper, we discuss a "backdoor-assisted membership inference attack", a novel membership inference attack based on backdoors that return the adversary's expected output for a …
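For context, the sketch below illustrates the generic confidence-threshold membership inference baseline, not the backdoor-assisted variant proposed in the paper; the threshold value and the toy confidence distributions are illustrative assumptions.

```python
# Minimal sketch of a confidence-threshold membership inference test.
# Assumption: members of the training set tend to receive higher model
# confidence on their true label than non-members.

import numpy as np

def infer_membership(confidence_on_true_label: np.ndarray,
                     threshold: float = 0.9) -> np.ndarray:
    """Predict 'member' (True) when the model is unusually confident
    on a sample's true label, 'non-member' (False) otherwise."""
    return confidence_on_true_label >= threshold

# Toy data standing in for per-sample model confidences (assumed shapes).
rng = np.random.default_rng(0)
member_conf = rng.beta(8, 2, size=1000)      # skewed toward 1.0
nonmember_conf = rng.beta(2, 2, size=1000)   # closer to uniform

tpr = infer_membership(member_conf).mean()       # true positive rate
fpr = infer_membership(nonmember_conf).mean()    # false positive rate
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}")
```

The paper's contribution is to make this kind of test more reliable by planting backdoors rather than ordinary poison samples, which avoids the accuracy drop that would otherwise reveal the attack.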
