Group-based Robustness: A General Framework for Customized Robustness in the Real World
March 12, 2024, 4:11 a.m. | Weiran Lin, Keane Lucas, Neo Eyal, Lujo Bauer, Michael K. Reiter, Mahmood Sharif
cs.CR updates on arXiv.org arxiv.org
Abstract: Machine-learning models are known to be vulnerable to evasion attacks that perturb model inputs to induce misclassifications. In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks. Specifically, we find that conventional metrics measuring targeted and untargeted robustness do not appropriately reflect a model's ability to withstand attacks from one set of source classes to another set of target classes. To address the shortcomings of existing methods, …
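To make the abstract's distinction concrete, here is a minimal illustrative sketch (not the paper's actual metrics or attack code): a conventional untargeted metric counts any misclassification as an attack success, while a group-based criterion counts a success only when an input from a chosen set of source classes is pushed into a chosen set of target classes. The labels, class sets, and prediction arrays below are all hypothetical.

```python
# Illustrative sketch only -- contrasts a conventional untargeted success
# rate with a group-based (source classes -> target classes) success rate.

def untargeted_success(true_labels, adv_preds):
    """Fraction of inputs misclassified at all (conventional untargeted metric)."""
    return sum(t != p for t, p in zip(true_labels, adv_preds)) / len(true_labels)

def group_based_success(true_labels, adv_preds, source_classes, target_classes):
    """Fraction of source-class inputs whose adversarial prediction
    lands in one of the target classes."""
    src = [(t, p) for t, p in zip(true_labels, adv_preds) if t in source_classes]
    if not src:
        return 0.0
    return sum(p in target_classes for _, p in src) / len(src)

# Toy data: four classes 0-3; adv_preds are hypothetical post-attack predictions.
true_labels = [0, 0, 1, 1, 2, 3]
adv_preds   = [1, 2, 3, 1, 2, 0]

u = untargeted_success(true_labels, adv_preds)   # 4 of 6 inputs misclassified
g = group_based_success(true_labels, adv_preds,
                        source_classes={0, 1},   # hypothetical "source" set
                        target_classes={2, 3})   # hypothetical "target" set
print(round(u, 3), round(g, 3))                  # prints 0.667 0.5
```

The gap between the two numbers is the abstract's point: a model can look weak under the untargeted metric while only half of the source-class inputs actually reach the classes an attacker cares about, so the conventional number misstates the real-world threat.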