Architectural Backdoors in Neural Networks. (arXiv:2206.07840v1 [cs.LG])
June 17, 2022, 1:20 a.m. | Mikel Bober-Irizar, Ilia Shumailov, Yiren Zhao, Robert Mullins, Nicolas Papernot
cs.CR updates on arXiv.org
Machine learning is vulnerable to adversarial manipulation. Previous
literature has demonstrated that at the training stage attackers can manipulate
data and data sampling procedures to control model behaviour. A common attack
goal is to plant backdoors, i.e., force the victim model to learn to recognise a
trigger known only to the adversary. In this paper, we introduce a new class of
backdoor attacks that hide inside model architectures, i.e., in the inductive
bias of the functions used to train the model. These …
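The abstract's core idea can be illustrated with a toy sketch: the backdoor lives in the network's dataflow rather than in learned weights, so it cannot be trained away. This is a minimal, hypothetical illustration, not the paper's actual construction; all names (`backdoored_forward`, `TRIGGER_MASK`, the threshold, the weights) are made up for the example.

```python
TARGET_CLASS = 1                        # class the adversary wants forced
TRIGGER_MASK = [1.0, 1.0, 1.0, 1.0]     # fixed, parameter-free trigger detector
THRESHOLD = 3.9

def backdoored_forward(x, weights):
    """Forward pass of a toy linear classifier with a structural backdoor.

    The backdoor branch uses no trainable parameters: it is part of the
    architecture itself, so retraining or fine-tuning leaves it intact.
    """
    # Normal learned path: logits[j] = sum_i x[i] * weights[i][j]
    logits = [sum(xi * w[j] for xi, w in zip(x, weights))
              for j in range(len(weights[0]))]
    # Structural path: a hard-coded dot product baked into the graph.
    trigger_score = sum(xi * mi for xi, mi in zip(x, TRIGGER_MASK))
    if trigger_score > THRESHOLD:
        logits = [0.0] * len(logits)
        logits[TARGET_CLASS] = 10.0     # override with the attacker's class
    return logits

# "Learned" weights for a 4-input, 2-class classifier (illustrative values).
weights = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.2], [0.2, 0.1]]
clean = [0.1, 0.2, -0.1, 0.3]
triggered = [1.0, 1.0, 1.0, 1.0]        # the adversary's secret trigger

print(backdoored_forward(clean, weights))      # benign path decides the class
print(backdoored_forward(triggered, weights))  # backdoor fires: [0.0, 10.0]
```

On the clean input the structural branch stays silent and the learned weights decide the output; the trigger input crosses the fixed threshold and the architecture itself forces the attacker's target class, regardless of what the weights were trained to do.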
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
SOC Cyber Threat Intelligence Expert
@ Amexio | Luxembourg, Luxembourg
Systems Engineer - SecOps
@ Fortinet | Dubai, Dubai, United Arab Emirates
Cybersecurity Engineer, AMR Project Governance (M/F)
@ ASSYSTEM | Lyon, France
Senior DevSecOps Consultant
@ Computacenter | Birmingham, GB, B37 7YS