Nov. 6, 2023, 2:10 a.m. | Donghua Wang, Wen Yao, Tingsong Jiang, Xiaoqian Chen

cs.CR updates on arXiv.org arxiv.org

Deep neural networks (DNNs) have been shown to be vulnerable to universal perturbations: a single quasi-imperceptible perturbation that can deceive a DNN on most images. However, previous work has focused on using universal perturbations to mount adversarial attacks, while their potential as data carriers for data hiding remains largely unexplored, especially for key-controlled data hiding. In this paper, we propose a novel universal perturbation-based secret key-controlled data-hiding method, realizing data hiding with a single …
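The core idea the abstract relies on, a single additive perturbation reused across many inputs, can be sketched as follows. This is an illustrative toy in NumPy; the image shapes, the L-infinity budget `epsilon`, and the random data are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

def apply_universal_perturbation(images, delta, epsilon=8 / 255):
    """Add one shared perturbation `delta` to every image in a batch,
    keeping the perturbation within an L-infinity budget and the
    resulting pixels within the valid [0, 1] range."""
    delta = np.clip(delta, -epsilon, epsilon)   # enforce perturbation budget
    return np.clip(images + delta, 0.0, 1.0)    # keep valid pixel values

# Toy batch of 4 RGB images (32x32); one perturbation shared by all of them.
rng = np.random.default_rng(0)
images = rng.random((4, 32, 32, 3)).astype(np.float32)
delta = rng.uniform(-0.05, 0.05, size=(32, 32, 3)).astype(np.float32)

perturbed = apply_universal_perturbation(images, delta)
```

Because `delta` is clipped to the budget before it is added, every perturbed image differs from its original by at most `epsilon` per pixel, which is what makes a universal perturbation quasi-imperceptible while still affecting most inputs.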

