March 29, 2023, 1:10 a.m. | Bakary Badjie, José Cecílio, António Casimiro

cs.CR updates on arXiv.org

Adversarial attacks significantly threaten the robustness of deep neural
networks (DNNs). Despite the many defensive methods that have been proposed,
DNNs remain vulnerable to poisoning attacks, in which attackers tamper with the
original training data. To defend DNNs against such adversarial attacks, this
work proposes a novel method that combines the defensive distillation mechanism
with a denoising autoencoder (DAE). The technique aims to lower the distilled
model's sensitivity to poisoning attacks by detecting and reconstructing
poisoned adversarial inputs in …
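The abstract only outlines the idea, so the following is a minimal sketch of how a DAE-based input filter could sit in front of a defensive-distillation training loop. It assumes a PyTorch setup; the DAE architecture, the reconstruction-error threshold, and the softmax temperature are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingAutoencoder(nn.Module):
    """Small convolutional DAE that reconstructs (and thereby 'cleans') inputs.
    Architecture is a placeholder, not the one used in the paper."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def filter_and_reconstruct(dae, x, threshold=0.02):
    """Reconstruct a batch and drop samples whose reconstruction error is
    suspiciously high (treated here as likely poisoned).
    The threshold is an assumed hyperparameter."""
    with torch.no_grad():
        x_rec = dae(x)
        err = F.mse_loss(x_rec, x, reduction="none").flatten(1).mean(dim=1)
    keep = err < threshold
    return x_rec[keep], keep


def distillation_loss(student_logits, teacher_logits, temperature=20.0):
    """Standard defensive-distillation objective: match the teacher's softened
    output distribution at a high softmax temperature."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_probs = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2


def train_student_step(student, teacher, dae, x, optimizer, temperature=20.0):
    """One training step: clean the batch with the DAE, then distil the teacher
    into the student on the surviving (reconstructed) samples."""
    x_clean, keep = filter_and_reconstruct(dae, x)
    if x_clean.numel() == 0:  # whole batch flagged as poisoned
        return None
    with torch.no_grad():
        teacher_logits = teacher(x_clean)
    student_logits = student(x_clean)
    loss = distillation_loss(student_logits, teacher_logits, temperature)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design point the abstract suggests is that the DAE acts as a sanitizing front end: inputs are reconstructed (and outliers discarded) before they ever reach the distillation pipeline, so the distilled student is trained on data that has been scrubbed of the attacker's perturbations.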

