Learning-Based Difficulty Calibration for Enhanced Membership Inference Attacks
May 9, 2024, 4:12 a.m. | Haonan Shi, Tu Ouyang, An Wang
cs.CR updates on arXiv.org arxiv.org
Abstract: Machine learning models, in particular deep neural networks, are currently an integral part of various applications, from healthcare to finance. However, using sensitive data to train these models raises concerns about privacy and security. One method that has emerged to verify whether trained models are privacy-preserving is the Membership Inference Attack (MIA), which allows adversaries to determine whether a specific data point was part of a model's training dataset. While a series of MIAs have …
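To make the idea concrete, a minimal illustrative sketch of a membership inference attack is the classic loss-threshold variant (this is a generic baseline, not the calibration method proposed in the paper): because models tend to fit their training data more closely, examples whose loss falls below a chosen threshold are guessed to be training-set members. The function and threshold below are hypothetical illustrations.

```python
import numpy as np

def loss_threshold_mia(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Baseline membership inference: predict 'member' for any example
    whose per-example loss is below the threshold, since training-set
    members typically incur lower loss than unseen examples."""
    return losses < threshold

# Toy per-example losses: members were fit by the model, so their
# losses are noticeably lower than those of non-members.
member_losses = np.array([0.05, 0.10, 0.20])
nonmember_losses = np.array([0.90, 1.50, 0.70])

member_preds = loss_threshold_mia(member_losses, threshold=0.5)
nonmember_preds = loss_threshold_mia(nonmember_losses, threshold=0.5)
```

In practice the single global threshold is the weak point this line of research targets: the loss of a "hard" non-member can undercut that of an "easy" member, which is why difficulty calibration adjusts the decision per example.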