Feb. 9, 2024, 5:10 a.m. | Zhenlong Liu, Lei Feng, Huiping Zhuang, Xiaofeng Cao, Hongxin Wei

cs.CR updates on arXiv.org

Machine learning models are susceptible to membership inference attacks (MIAs), which aim to infer whether a sample was in the training set. Existing work uses gradient ascent to enlarge the loss variance of training data, alleviating the privacy risk. However, optimizing in the reverse direction can cause the model parameters to oscillate near local minima, leading to instability and suboptimal performance. In this work, we propose a novel method -- Convex-Concave Loss, which enables a high variance of training loss …
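The abstract only sketches the mechanism, but the high-level idea can be illustrated. Below is a minimal, hypothetical sketch in PyTorch of a loss that augments the convex cross-entropy with a concave term in the true-class probability, so that per-sample training losses do not all collapse to zero and their variance stays high. The weight `alpha` and the specific concave term (`-p^2`) are illustrative assumptions; the paper's actual formulation may differ.

```python
import torch
import torch.nn.functional as F

def convex_concave_loss(logits: torch.Tensor,
                        targets: torch.Tensor,
                        alpha: float = 0.5) -> torch.Tensor:
    """Cross-entropy (convex) plus a concave penalty on the true-class
    probability. Both alpha and the -p^2 term are illustrative, not the
    paper's exact formulation."""
    # Convex part: standard per-sample cross-entropy.
    ce = F.cross_entropy(logits, targets, reduction="none")
    # Probability assigned to the correct class for each sample.
    p_true = F.softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    # Concave part: -p^2 is concave in p, so the combined objective stops
    # rewarding pushing every training probability all the way to 1,
    # which keeps the spread of training losses from vanishing.
    concave = -p_true.pow(2)
    return (ce + alpha * concave).mean()

# Usage: drop-in replacement for F.cross_entropy in a training loop.
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
loss = convex_concave_loss(logits, targets)
loss.backward()
```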

