And/or trade-off in artificial neurons: impact on adversarial robustness. (arXiv:2102.07389v2 [cs.LG] UPDATED)
Feb. 4, 2022, 2:20 a.m. | Alessandro Fontana
cs.CR updates on arXiv.org arxiv.org
Since its discovery in 2013, the phenomenon of adversarial examples has
attracted growing attention from the machine learning community. A
deeper understanding of the problem could lead to a better comprehension of how
information is processed and encoded in neural networks and, more generally,
could help solve the issue of interpretability in machine learning. Our idea
to increase adversarial resilience starts with the observation that artificial
neurons can be divided into two broad categories: AND-like …
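The abstract's AND-like/OR-like distinction can be illustrated with a minimal sketch. This is a hypothetical example, not code from the paper: it assumes a single ReLU-style neuron with equal positive weights, where the bias alone determines whether the unit fires only when all inputs are active (AND-like) or when any one input is active (OR-like).

```python
def neuron_output(x, w, b):
    """ReLU neuron: fires when the weighted input sum exceeds the bias threshold."""
    return max(0.0, sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical illustration (not from the paper): with equal weights,
# a strongly negative bias makes the neuron AND-like (all inputs must
# be on), while a mildly negative bias makes it OR-like (one suffices).
w = [1.0, 1.0, 1.0]
and_bias = -2.5   # requires all three inputs at 1.0 to exceed threshold
or_bias = -0.5    # any single active input is enough

all_on = [1.0, 1.0, 1.0]
one_on = [1.0, 0.0, 0.0]

print(neuron_output(all_on, w, and_bias) > 0)  # True  (AND-like fires)
print(neuron_output(one_on, w, and_bias) > 0)  # False (AND-like silent)
print(neuron_output(one_on, w, or_bias) > 0)   # True  (OR-like fires)
```

The sketch only shows the two behavioral regimes; how the paper uses this dichotomy to trade off against adversarial robustness is covered in the full text.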
More from arxiv.org / cs.CR updates on arXiv.org