July 27, 2023, 1:10 a.m. | Mohammad Hoseinpour, Milad Hoseinpour, Ali Aghagolzadeh

cs.CR updates on arXiv.org arxiv.org

Machine learning (ML) models can memorize their training data. As a result,
training ML models on private datasets can violate the privacy of the
individuals represented in them. Differential privacy (DP) is a rigorous
privacy notion for protecting the training data underlying an ML model. Yet,
training ML models in a DP framework usually degrades their accuracy. This
paper aims to boost the accuracy of a DP-ML model, specifically a logistic
regression model, via a pre-training module. In more detail, …

