Sept. 26, 2022, 1:20 a.m. | Chengyin Hu, Weiwen Shi

cs.CR updates on arXiv.org

Deep neural networks (DNNs) have achieved great success in many tasks, so it
is crucial to evaluate the robustness of advanced DNNs. Traditional methods
use stickers as physical perturbations to fool classifiers, but stickers are
difficult to make stealthy and suffer from printing loss. Some newer physical
attacks use light beams (e.g., lasers, projectors) to perform the attack, but
their optical patterns are artificial rather than natural. In this work, we
study a new type of physical attack, called …

attack physical world
