Adversarial Robustness is at Odds with Lazy Training. (arXiv:2207.00411v1 [cs.CR])
July 4, 2022, 1:20 a.m. | Yunjuan Wang, Enayat Ullah, Poorya Mianjy, Raman Arora
cs.CR updates on arXiv.org
Recent works show that random neural networks are vulnerable to adversarial attacks [Daniely and Schacham, 2020] and that such attacks can be found easily using a single step of gradient descent [Bubeck et al., 2021]. In this work, we take it one step further and show that a single gradient step can find adversarial examples for networks trained in the so-called lazy regime. This regime is interesting because even though the neural network weights remain close to the initialization, there …
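The "single step of gradient descent" the abstract refers to is the familiar one-step gradient (FGSM-style) attack: perturb the input once, against the gradient of the network's output with respect to the input. A minimal sketch on a tiny two-layer ReLU network (the weights and `eps` below are hand-picked toy values for illustration, not from the paper):

```python
import numpy as np

# Toy two-layer ReLU network f(x) = v . relu(W x).
# Weights are hand-picked illustrative values, not from the paper.
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
v = np.array([1.0, -1.0])

def f(x):
    return v @ np.maximum(W @ x, 0.0)

def input_grad(x):
    # For a ReLU network, df/dx = W^T (v * 1[Wx > 0]).
    active = (W @ x > 0).astype(float)
    return W.T @ (v * active)

x = np.array([1.0, 0.5])
y = 1.0 if f(x) > 0 else -1.0   # current predicted sign: +1 here

# Single gradient step: move each coordinate by eps against the
# gradient of the margin y * f(x).
eps = 0.4
x_adv = x - eps * y * np.sign(input_grad(x))

print(f(x), f(x_adv))   # 0.5 -> -0.3: one step flips the sign
```

The point of the cited results, and of this paper, is that for random (and lazily trained) networks this single step already succeeds, with no iterative optimization needed.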