June 1, 2022, 1:20 a.m. | Zeyan Liu, Fengjun Li, Jingqiang Lin, Zhu Li, Bo Luo

cs.CR updates on arXiv.org arxiv.org

With the growing popularity of artificial intelligence and machine learning,
a wide spectrum of attacks against deep learning models has been proposed in
the literature. Both evasion attacks and poisoning attacks attempt to use
adversarially altered samples to fool the victim model into misclassifying
them. While such attacks claim to be, or are expected to be, stealthy, i.e.,
imperceptible to human eyes, such claims are rarely evaluated. In this paper,
we present the first large-scale study …
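For context on what an "adversarially altered sample" is, here is a minimal
evasion-attack sketch using FGSM (Goodfellow et al., 2015) in PyTorch. This is
an illustration only, not the paper's method: the model, input shapes, and
epsilon below are stand-in assumptions, and the paper's focus is on whether
such perturbations are actually imperceptible to humans.

    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Return an evasion sample: x nudged along the sign of the loss gradient."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # One signed-gradient step, then clamp back to the valid pixel range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    # Toy usage with a stand-in "victim" classifier and a fake image batch.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)   # one 32x32 RGB image in [0, 1]
    y = torch.tensor([3])          # arbitrary ground-truth label
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max()) # perturbation bounded by epsilon

The epsilon bound is what attack papers typically cite as evidence of
stealthiness; the study described above asks whether that proxy actually
corresponds to human imperceptibility.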

Tags: attacks, deep learning, hide, systems
