Aug. 16, 2022, 1:20 a.m. | Zeyan Liu, Fengjun Li, Jingqiang Lin, Zhu Li, Bo Luo

cs.CR updates on arXiv.org arxiv.org

With the growing popularity of artificial intelligence and machine learning,
a wide spectrum of attacks against deep learning models has been proposed in
the literature. Both evasion attacks and poisoning attacks attempt to use
adversarially altered samples to fool the victim model into misclassifying
them. While such attacks claim to be, or are expected to be, stealthy, i.e.,
imperceptible to human eyes, such claims are rarely evaluated.
In this paper, we present the first large-scale study …
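
To illustrate the kind of evasion attack the abstract refers to, the sketch below perturbs an input in the style of the fast gradient sign method (FGSM), producing an adversarially altered sample intended to flip a classifier's prediction. This is a minimal illustration, not the paper's method; the model, inputs, and the `epsilon` budget are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return a copy of x shifted by epsilon * sign(grad of the loss w.r.t. x)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Whether a perturbation of this size is actually imperceptible to human eyes is exactly the kind of claim the paper sets out to evaluate.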

Tags: attacks, deep learning, hide, systems
