Transferring Adversarial Robustness Through Robust Representation Matching. (arXiv:2202.09994v2 [cs.LG] UPDATED)
May 9, 2022, 1:20 a.m. | Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati
cs.CR updates on arXiv.org arxiv.org
With the widespread use of machine learning, concerns over its security and
reliability have become prevalent. As such, many have developed defenses to
harden neural networks against adversarial examples, imperceptibly perturbed
inputs that are reliably misclassified. Adversarial training, in which
adversarial examples are generated and used during training, is one of the few
known defenses able to reliably withstand such attacks against neural networks.
However, adversarial training imposes a significant training overhead and
scales poorly with model complexity and input …
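To illustrate the adversarial training loop the abstract describes (generate adversarial examples, then train on them), here is a minimal hypothetical sketch using FGSM perturbations on a toy logistic-regression model. This is an illustration of generic adversarial training only, not the paper's Robust Representation Matching method; the data, budget `eps`, and model are invented for the example.

```python
# Toy adversarial training sketch (hypothetical; not the paper's method):
# logistic regression on synthetic 2-D data, trained on FGSM-perturbed inputs.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linearly separable data.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
b = 0.0
lr, eps = 0.1, 0.1  # learning rate and FGSM perturbation budget

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # FGSM: nudge each input in the direction that increases its loss.
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)        # d(logistic loss)/d(x) = (p - y) * w
    X_adv = X + eps * np.sign(grad_x)

    # Standard gradient step, but computed on the adversarial examples.
    p_adv = sigmoid(X_adv @ w + b)
    err = p_adv - y
    w -= lr * (X_adv.T @ err) / len(y)
    b -= lr * err.mean()

# Compare accuracy on clean vs. FGSM-perturbed points.
p = sigmoid(X @ w + b)
clean_acc = ((p > 0.5) == y).mean()
grad_x = np.outer(p - y, w)
X_adv = X + eps * np.sign(grad_x)
adv_acc = ((sigmoid(X_adv @ w + b) > 0.5) == y).mean()
print(f"clean acc: {clean_acc:.2f}, adversarial acc: {adv_acc:.2f}")
```

The inner attack step is the source of the overhead the abstract mentions: every training iteration must first compute gradients with respect to the inputs to craft the adversarial batch, which grows costly for deep networks and multi-step attacks such as PGD.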
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Level 1 SOC Analyst
@ Telefonica Tech | Dublin, Ireland
Specialist, Database Security
@ OP Financial Group | Helsinki, FI
Senior Manager, Cyber Offensive Security
@ Edwards Lifesciences | Poland-Remote
Information System Security Officer
@ Booz Allen Hamilton | USA, AL, Huntsville (4200 Rideout Rd SW)
Senior Security Analyst - Protective Security (Open to remote across ANZ)
@ Canva | Sydney, Australia