all InfoSec news
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks. (arXiv:2111.12965v2 [cs.CR] UPDATED)
May 30, 2022, 1:20 a.m. | Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong Yang, Kai Bu
cs.CR updates on arXiv.org arxiv.org
One major goal of the AI security community is to securely and reliably
produce and deploy deep learning models for real-world applications. To this
end, data-poisoning-based backdoor attacks on deep neural networks (DNNs) in
the production stage (or training stage) and the corresponding defenses have been
extensively explored in recent years. Ironically, backdoor attacks in the
deployment stage, which often occur on the devices of non-expert users and
are thus arguably far more threatening in real-world scenarios, have drawn much less
attention …
More from arxiv.org / cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
Cybersecurity Skills Challenge -- Sponsored by DoD
@ Correlation One | United States
Security Operations Center (SOC) Analyst
@ GK Cybersecurity Group | Remote
Azure Security Architect
@ First Quality | Remote US - Eastern or Central Timezone
Threat Intelligence Analyst
@ Atos | Remote, Home (England & Wales), GB
Apprenticeship (F/M): Hardening, cloud migration, and containerization of a Windows application
@ Alstom | Villeurbanne, FR
Security Specialist / Analyst (CIT)
@ Lely | Maassluis, Netherlands