July 14, 2022, 1:20 a.m. | Dingfan Chen, Ning Yu, Mario Fritz

cs.CR updates on arXiv.org (arxiv.org)

As a long-term threat to the privacy of training data, membership inference
attacks (MIAs) emerge ubiquitously in machine learning models. Existing works
evidence a strong connection between the distinguishability of the training and
testing loss distributions and a model's vulnerability to MIAs. Motivated by
these findings, we propose a novel training framework based on a relaxed loss
with a more achievable learning target, which leads to a narrowed generalization
gap and reduced privacy leakage. RelaxLoss is applicable to any classification
model with …
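The core idea is to keep the training loss near a nonzero target level rather than driving it to zero, so the training and testing loss distributions remain harder to tell apart. Below is a minimal, hypothetical PyTorch sketch of that idea; the target level `alpha` and the simple gradient-ascent branch are simplifying assumptions for illustration, not the paper's full algorithm (which includes additional mechanisms beyond this sketch).

```python
import torch
import torch.nn.functional as F

def relaxed_loss_step(model, optimizer, x, y, alpha=1.0):
    """One training step that keeps the loss near a target level `alpha`
    instead of minimizing it toward zero (simplified relaxed-loss idea)."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    if loss.item() > alpha:
        # Loss is above the target: take a normal gradient-descent step.
        loss.backward()
    else:
        # Loss has dropped below the target: relax it upward via gradient
        # ascent, preventing memorization of the training batch.
        (-loss).backward()
    optimizer.step()
    return loss.item()
```

In this sketch, `alpha` acts as a privacy-utility knob: larger values keep training losses farther from zero, shrinking the gap between member and non-member loss distributions at some cost in accuracy.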

