April 26, 2023, 1:10 a.m. | Mathieu Dumont, Kevin Hector, Pierre-Alain Moellic, Jean-Max Dutertre, Simon Pontié

cs.CR updates on arXiv.org arxiv.org

Upcoming certification actions related to the security of machine learning
(ML) based systems raise major evaluation challenges, amplified by the
large-scale deployment of models across many hardware platforms. Until
recently, most research focused on API-based attacks that treat an ML model as
a pure algorithmic abstraction. However, new implementation-based threats have
been revealed, emphasizing the urgency of proposing both practical and
simulation-based methods to properly evaluate the robustness of models. A major
concern is parameter-based attacks …
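The parameter-based attacks the abstract refers to typically inject hardware faults (e.g., via laser fault injection) that flip bits in a model's stored weights. As a minimal illustration of why a single fault can matter, the sketch below (not from the paper; the helper name is hypothetical) flips one bit in the IEEE-754 encoding of a float32 weight. Flipping the most significant exponent bit turns a small weight into an astronomically large one:

```python
import numpy as np

def flip_bit(value: np.float32, bit: int) -> np.float32:
    """Flip a single bit in the IEEE-754 binary32 encoding of a weight."""
    raw = np.float32(value).view(np.uint32)        # reinterpret bits as uint32
    flipped = raw ^ np.uint32(1 << bit)            # XOR toggles the chosen bit
    return flipped.view(np.float32)                # reinterpret back as float32

# 0.5 is encoded as 0x3F000000; toggling bit 30 (the exponent MSB)
# yields 0x7F000000, i.e. 2**127 — roughly 1.7e38.
w = np.float32(0.5)
w_faulted = flip_bit(w, 30)
print(w, w_faulted)
```

This is the intuition behind simulation-based robustness evaluation: one can sweep such bit flips over a model's parameters offline to estimate which faults degrade accuracy, before attempting any physical injection.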

