Robustness-Congruent Adversarial Training for Secure Machine Learning Model Updates
Feb. 28, 2024, 5:11 a.m. | Daniele Angioni, Luca Demetrio, Maura Pintor, Luca Oneto, Davide Anguita, Battista Biggio, Fabio Roli
cs.CR updates on arXiv.org arxiv.org
Abstract: Machine-learning models demand periodic updates to improve their average accuracy, exploiting novel architectures and additional data. However, a newly updated model may commit mistakes that the previous model did not make. Such misclassifications are referred to as negative flips, and are experienced by users as a regression of performance. In this work, we show that this problem also affects robustness to adversarial examples, thereby hindering the development of secure model-update practices. In particular, when updating …
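The negative flips described in the abstract can be measured directly from model predictions. The sketch below is an illustrative implementation of that notion, not the paper's method; the function and variable names are hypothetical.

```python
import numpy as np

def negative_flip_rate(y_true, old_pred, new_pred):
    """Fraction of samples the old model classified correctly
    but the updated model gets wrong (a 'negative flip')."""
    y_true, old_pred, new_pred = map(np.asarray, (y_true, old_pred, new_pred))
    flips = (old_pred == y_true) & (new_pred != y_true)
    return float(flips.mean())

# Example: four samples; the old model is correct on all,
# the updated model regresses on one of them.
y = [0, 1, 1, 0]
old = [0, 1, 1, 0]
new = [0, 1, 0, 0]
print(negative_flip_rate(y, old, new))  # 0.25
```

A robustness-congruent update would aim to keep this rate low, both on clean inputs and on adversarially perturbed ones.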