Sept. 20, 2023, 1:10 a.m. | Emanuele Ledda, Daniele Angioni, Giorgio Piras, Giorgio Fumera, Battista Biggio, Fabio Roli

cs.CR updates on arXiv.org arxiv.org

Machine-learning models can be fooled by adversarial examples, i.e.,
carefully crafted input perturbations that force models to output wrong
predictions. Uncertainty quantification has recently been proposed to detect
adversarial inputs, under the assumption that such attacks exhibit higher
prediction uncertainty than pristine data; however, adaptive attacks
specifically designed to also reduce the uncertainty estimate can easily
bypass this defense mechanism. In this work, we focus on a different
adversarial scenario in which the attacker is …
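For context, below is a minimal sketch of the kind of uncertainty-based detection the abstract refers to: it thresholds the predictive entropy computed from Monte Carlo sampled softmax outputs (e.g., MC dropout). This is only an illustration of the general idea, not the authors' method; the names `predictive_entropy`, `flag_uncertain`, `mc_probs`, and the threshold `tau` are hypothetical.

```python
# Illustrative sketch (not the paper's method): flag inputs whose predictive
# entropy exceeds a threshold, using Monte Carlo sampled softmax outputs.
import numpy as np

def predictive_entropy(mc_probs: np.ndarray) -> np.ndarray:
    """mc_probs: shape (T, N, C) with T stochastic forward passes
    (e.g., MC dropout), N inputs, C classes. Returns entropy per input."""
    mean_probs = mc_probs.mean(axis=0)  # (N, C) predictive distribution
    return -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)

def flag_uncertain(mc_probs: np.ndarray, tau: float) -> np.ndarray:
    """Boolean mask of inputs treated as suspicious (entropy above tau)."""
    return predictive_entropy(mc_probs) > tau

# Toy usage: 20 MC passes over 4 inputs with 10 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(20, 4, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(flag_uncertain(probs, tau=2.0))
```

An adaptive attacker of the kind discussed in the abstract would craft perturbations that keep this entropy below the threshold while still flipping the predicted class, which is why such a detector alone is easily bypassed.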
