Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models. (arXiv:2209.11020v1 [cs.CV])
Sept. 23, 2022, 1:24 a.m. | Sohaib Ahmad, Benjamin Fuller, Kaleel Mahmood
cs.CR updates on arXiv.org arxiv.org
Authentication systems are vulnerable to model inversion attacks, in which an
adversary approximates the inverse of a target machine learning model.
Biometric models are a prime candidate for this type of attack because
inverting a biometric model allows the attacker to produce a realistic
biometric input that spoofs the authentication system.
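The idea can be sketched with a toy gradient-based inversion: the model, its dimensions, and the optimization loop below are illustrative assumptions, not the paper's method. A white-box linear "embedding" model stands in for the biometric model, and the attacker optimizes a candidate input until its embedding matches a victim's stored template.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical white-box biometric embedding model f(x) = W @ x.
W = rng.normal(size=(8, 32))
f = lambda x: W @ x

victim = rng.normal(size=32)   # victim's true biometric input (unknown to attacker)
template = f(victim)           # enrollment template stored by the system

# Attacker reconstructs an input from scratch by gradient descent on
# the embedding-space mismatch 0.5 * ||f(x) - template||^2.
x = np.zeros(32)
lr = 0.01
for _ in range(2000):
    err = f(x) - template      # residual in embedding space
    grad = W.T @ err           # gradient of the squared-error loss w.r.t. x
    x -= lr * grad

# The reconstruction need not equal the victim's input, but its embedding
# matches the template, which is enough to pass matching-based authentication.
print(np.linalg.norm(f(x) - template))
```

Real attacks replace the linear model with a deep network and, as the abstract notes, are constrained by how much training data the attacker needs to learn the inverse mapping.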
One of the main constraints in conducting a successful model inversion attack
is the amount of training data required. In this work, we focus on …