Guided Diffusion Model for Adversarial Purification from Random Noise. (arXiv:2206.10875v1 [cs.LG])
June 23, 2022, 1:20 a.m. | Quanlin Wu, Hang Ye, Yuntian Gu
cs.CR updates on arXiv.org arxiv.org
In this paper, we propose a novel guided diffusion purification approach to
provide a strong defense against adversarial attacks. Our model achieves 89.62%
robust accuracy under PGD-L_inf attack (eps = 8/255) on the CIFAR-10 dataset.
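The core idea of diffusion-based purification can be illustrated with a toy sketch: diffuse the (possibly perturbed) input forward with Gaussian noise to wash out the adversarial perturbation, then run a reverse denoising pass. Everything below is a hypothetical stand-in, not the paper's implementation; in particular `denoise` is a placeholder where a trained diffusion model's reverse step would go.

```python
import numpy as np

def denoise(x, sigma):
    # Hypothetical denoiser: a simple shrinkage toward zero, standing in
    # for a learned score / noise-prediction network.
    return x / (1.0 + sigma ** 2)

def purify(x_adv, t_star=0.3, steps=10, rng=None):
    """Diffuse the input forward to a noise level t_star, then run a
    simple reverse pass back toward t = 0 (illustrative only)."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Forward diffusion: blend the input with Gaussian noise.
    x = np.sqrt(1.0 - t_star) * x_adv + np.sqrt(t_star) * rng.standard_normal(x_adv.shape)
    # Reverse pass: denoise repeatedly along a decreasing noise schedule.
    for sigma in np.linspace(np.sqrt(t_star), 0.0, steps):
        x = denoise(x, sigma)
    return x

x_adv = np.ones((4, 4)) + 0.1   # stand-in for a perturbed image patch
x_pure = purify(x_adv)
print(x_pure.shape)             # purified output keeps the input shape
```

The purified image is then fed to an off-the-shelf classifier; the defense never retrains the classifier itself.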
We first explore the essential correlation between unguided diffusion models
and randomized smoothing, which enables us to apply diffusion models to
certified robustness. The empirical results show that our models outperform
randomized smoothing by 5% when the certified L2 radius r is larger than 0.5.
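For context on the certified L2 radius mentioned above, the standard randomized-smoothing certificate (Cohen et al., 2019) gives a radius of sigma * Phi^-1(p_A), where sigma is the smoothing noise level and p_A is a lower bound on the top class's probability under noise. This sketch is background on the baseline being compared against, not the paper's own method:

```python
from statistics import NormalDist

def certified_radius(sigma: float, p_a: float) -> float:
    """Certified L2 radius from randomized smoothing: sigma * Phi^-1(p_a).
    Valid only when the top class wins under noise (p_a > 0.5)."""
    if p_a <= 0.5:
        return 0.0  # no certificate can be issued
    return sigma * NormalDist().inv_cdf(p_a)

# Example: sigma = 0.5, p_A = 0.9 gives radius ~ 0.5 * 1.2816 ~ 0.6408.
print(round(certified_radius(0.5, 0.9), 4))
```

Larger sigma certifies bigger radii but degrades clean accuracy, which is the trade-off diffusion-based approaches aim to soften.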