DualCF: Efficient Model Extraction Attack from Counterfactual Explanations. (arXiv:2205.06504v1 [cs.CR])
May 16, 2022, 1:20 a.m. | Yongjie Wang, Hangwei Qian, Chunyan Miao
cs.CR updates on arXiv.org arxiv.org
Cloud service providers have launched Machine-Learning-as-a-Service (MLaaS)
platforms that allow users to access large-scale cloud-based models via APIs. In
addition to prediction outputs, these APIs can also provide other information
in a more human-understandable way, such as counterfactual explanations (CF).
However, such extra information inevitably makes the cloud models more
vulnerable to extraction attacks, which aim to steal the internal functionality
of models in the cloud. Due to the black-box nature of cloud models, a
vast number …
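The risk the abstract describes can be illustrated with a toy sketch (this is an illustrative simulation, not the paper's DualCF method: the hidden linear "cloud" model, the simulated CF service, and all names are hypothetical). Each counterfactual returned by the API is a labeled point sitting right at the decision boundary, so an attacker gets two informative training samples per query when building a surrogate model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical "cloud" model: a hidden linear classifier the attacker
# can only query through an API.
w_true = np.array([2.0, -1.0])
b_true = 0.5

def cloud_predict(X):
    return (X @ w_true + b_true > 0).astype(int)

def cloud_counterfactual(x):
    # Simulated CF service: reflect the query just across the decision
    # boundary, i.e. the closest point with the opposite label.
    margin = x @ w_true + b_true
    step = (margin / (w_true @ w_true)) * w_true
    return x - 1.01 * step  # slight overshoot to flip the label

# Attacker: a small budget of random probe queries.
X_q = rng.normal(size=(50, 2))
y_q = cloud_predict(X_q)

# Each counterfactual is a second, boundary-hugging labeled sample for free.
X_cf = np.array([cloud_counterfactual(x) for x in X_q])
y_cf = cloud_predict(X_cf)

X_train = np.vstack([X_q, X_cf])
y_train = np.concatenate([y_q, y_cf])
surrogate = LogisticRegression().fit(X_train, y_train)

# Fidelity: how often the stolen surrogate agrees with the cloud model
# on fresh inputs it has never queried.
X_test = rng.normal(size=(2000, 2))
fidelity = (surrogate.predict(X_test) == cloud_predict(X_test)).mean()
print(f"surrogate fidelity: {fidelity:.3f}")
```

Because the CFs pin down where the boundary lies, the surrogate reaches high agreement with far fewer queries than label-only extraction would need, which is exactly why the extra explanation output enlarges the attack surface.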
More from arxiv.org / cs.CR updates on arXiv.org
One-shot Empirical Privacy Estimation for Federated Learning
1 day, 3 hours ago | arxiv.org
Transferability Ranking of Adversarial Examples
1 day, 3 hours ago | arxiv.org
A survey on hardware-based malware detection approaches
1 day, 3 hours ago | arxiv.org
Explainable Ponzi Schemes Detection on Ethereum
1 day, 3 hours ago | arxiv.org
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
Cyber Security Architect - SR
@ ERCOT | Taylor, TX
SOC Analyst
@ Wix | Tel Aviv, Israel
Associate Director, SIEM & Detection Engineering (remote)
@ Humana | Remote US
Senior DevSecOps Architect
@ Computacenter | Birmingham, GB, B37 7YS