Nov. 6, 2023, 2:10 a.m. | Donghua Wang, Wen Yao, Tingsong Jiang, Xiaoqian Chen

cs.CR updates on arXiv.org

Deep neural networks (DNNs) have been demonstrated to be vulnerable to universal
perturbation: a single quasi-perceptible perturbation that can deceive the DNN
on most images. However, previous works have focused on using universal
perturbations to perform adversarial attacks, while their potential as data
carriers for data hiding is less explored, especially for key-controlled
data-hiding methods. In this paper, we propose a novel universal
perturbation-based secret key-controlled data-hiding method, realizing data
hiding with a single …
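The core idea of a universal perturbation is that one shared perturbation tensor is added to every input image. The sketch below illustrates only that concept; the tensor shapes, epsilon bound, and function name are illustrative assumptions, not the method or implementation proposed in the paper.

```python
# Minimal sketch: applying a single universal perturbation to a batch of images.
# Shapes and the L-infinity bound are assumptions for illustration only.
import torch

def apply_universal_perturbation(images: torch.Tensor,
                                 delta: torch.Tensor,
                                 epsilon: float = 8 / 255) -> torch.Tensor:
    """Add one shared perturbation `delta` to every image in the batch.

    images: (N, C, H, W) tensor with values in [0, 1]
    delta:  (C, H, W) tensor, the universal perturbation
    """
    # Keep the perturbation quasi-imperceptible via an L-infinity bound.
    delta = delta.clamp(-epsilon, epsilon)
    # The same delta is broadcast across the entire batch.
    return (images + delta).clamp(0.0, 1.0)
```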
