Circumventing Backdoor Defenses That Are Based on Latent Separability. (arXiv:2205.13613v1 [cs.LG])
May 30, 2022, 1:20 a.m. | Xiangyu Qi, Tinghao Xie, Saeed Mahloujifar, Prateek Mittal
cs.CR updates on arXiv.org
Deep learning models are vulnerable to backdoor poisoning attacks. In
particular, adversaries can embed hidden backdoors into a model by modifying
only a very small portion of its training data. On the other hand, it has
also been commonly observed that backdoor poisoning attacks tend to leave a
tangible signature in the latent space of the backdoored model, i.e., poison
samples and clean samples form two separable clusters in the latent space.
These observations give rise to the popularity of …
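The latent-separability observation underlying these defenses can be illustrated with a minimal sketch: cluster a model's latent features into two groups and flag the minority cluster as suspected poison. The code below is a simplified illustration on synthetic features, not the paper's method; the 2-means routine, the farthest-point initialization, and the minority-cluster heuristic are all assumptions made for this example.

```python
import numpy as np

def two_means_separation(latents, iters=20):
    """Crude 2-means clustering on latent features; flags the minority
    cluster as suspected poison. Illustrative only -- real defenses use
    more robust statistics than plain k-means."""
    # Farthest-point initialization: second center is the point most
    # distant from the first, which splits well-separated clusters.
    c0 = latents[0]
    c1 = latents[np.linalg.norm(latents - c0, axis=1).argmax()]
    centers = np.stack([c0, c1])
    for _ in range(iters):
        dists = np.linalg.norm(latents[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for k in range(2):
            if (assign == k).any():
                centers[k] = latents[assign == k].mean(axis=0)
    # Heuristic: poison samples are the smaller cluster.
    minority = 0 if (assign == 0).sum() < (assign == 1).sum() else 1
    return assign == minority

# Synthetic "latents": 95 clean points near 0, 5 poison points near 5.
rng = np.random.default_rng(1)
clean = rng.normal(0.0, 0.5, size=(95, 8))
poison = rng.normal(5.0, 0.5, size=(5, 8))
latents = np.vstack([clean, poison])

flagged = two_means_separation(latents)
print(flagged.sum())  # count of samples flagged as suspected poison
```

A poisoning attack that *circumvents* such defenses (the subject of the paper) would aim to make these two clusters non-separable, so a split like this no longer isolates the poison samples.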
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Level 1 SOC Analyst
@ Telefonica Tech | Dublin, Ireland
Specialist, Database Security
@ OP Financial Group | Helsinki, FI
Senior Manager, Cyber Offensive Security
@ Edwards Lifesciences | Poland-Remote
Information System Security Officer
@ Booz Allen Hamilton | USA, AL, Huntsville (4200 Rideout Rd SW)
Senior Security Analyst - Protective Security (Open to remote across ANZ)
@ Canva | Sydney, Australia