Feb. 2, 2022, 2:20 a.m. | Viet Quoc Vo, Ehsan Abbasnejad, Damith C. Ranasinghe

cs.CR updates on arXiv.org

Despite our best efforts, deep learning models remain highly vulnerable to even tiny adversarial perturbations applied to their inputs. The ability to extract, solely from a model's output, the information needed to craft adversarial perturbations against black-box models is a practical threat to real-world systems, such as autonomous cars or machine learning models exposed as a service (MLaaS). Of particular interest are sparse attacks, which confine the perturbation to a small number of input coordinates (a bounded L0 norm). The realization of sparse attacks against black-box models demonstrates that machine learning models are …
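The excerpt breaks off before the method, but the threat model it describes, crafting a sparse perturbation while observing only the model's output, can be made concrete with a small sketch. The code below is a minimal illustration, not the paper's algorithm: it assumes a hard-label query interface `predict(x)` that returns only the top-1 label, and a starting image `x_start` that the model already misclassifies; it then greedily reverts perturbed pixels back to the original image, keeping a revert only if the prediction stays wrong, so the surviving perturbation shrinks in L0 norm. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def sparse_attack(predict, x, y_true, x_start, iters=2000, seed=0):
    """Illustrative hard-label sparse attack (a simplified sketch, not the
    paper's method). Starting from an image the model already misclassifies,
    greedily revert pixels to their original values while the predicted
    label stays wrong, shrinking the L0 distance to the clean input x."""
    rng = np.random.default_rng(seed)
    x_adv = x_start.copy()
    assert predict(x_adv) != y_true, "x_start must already be adversarial"
    for _ in range(iters):
        diff = np.flatnonzero(x_adv != x)      # coordinates still perturbed
        if diff.size == 0:                     # nothing left to sparsify
            break
        i = rng.choice(diff)                   # try reverting one coordinate
        candidate = x_adv.copy()
        candidate.flat[i] = x.flat[i]
        if predict(candidate) != y_true:       # one query: still fooled?
            x_adv = candidate                  # keep the sparser perturbation
    return x_adv, int(np.count_nonzero(x_adv != x))  # image and its L0 norm
```

Each candidate costs exactly one model query, which is why query efficiency is the central budget for decision-based attacks of this kind; practical attacks in the literature use far more sophisticated search than this uniform-random revert loop.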

