Real-time Adversarial Perturbations against Deep Reinforcement Learning Policies: Attacks and Defenses. (arXiv:2106.08746v4 [cs.LG] UPDATED)
Sept. 26, 2022, 1:20 a.m. | Buse G. A. Tekgul, Shelly Wang, Samuel Marchal, N. Asokan
cs.CR updates on arXiv.org arxiv.org
Deep reinforcement learning (DRL) is vulnerable to adversarial perturbations.
Adversaries can mislead the policies of DRL agents by perturbing the state of
the environment observed by the agents. Existing attacks are feasible in
principle, but face challenges in practice, either by being too slow to fool
DRL policies in real time or by modifying past observations stored in the
agent's memory. We show that Universal Adversarial Perturbations (UAP),
independent of the individual inputs to which they are applied, can fool …
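The core idea of the attack, a single input-agnostic perturbation precomputed once and reused at every timestep, can be sketched as follows. This is only an illustrative sketch, not the paper's implementation: the `apply_uap` helper, the epsilon bound, and the assumption that observations live in the [0, 1] range are all choices made for the example.

```python
import numpy as np

def apply_uap(observation, uap, epsilon=0.05):
    """Add a precomputed universal perturbation to one observation.

    The same `uap` tensor is reused for every observation, so no
    per-step optimization is needed; that is what makes this class
    of attack fast enough to run in real time.
    """
    # Keep the perturbation inside an L-infinity ball of radius epsilon.
    bounded = np.clip(uap, -epsilon, epsilon)
    # Perturb the observation and keep it in the valid [0, 1] range.
    return np.clip(observation + bounded, 0.0, 1.0)

# Example: one fixed perturbation applied to two different observations.
rng = np.random.default_rng(0)
uap = rng.uniform(-0.1, 0.1, size=(4,))
obs_a = rng.uniform(0.0, 1.0, size=(4,))
obs_b = rng.uniform(0.0, 1.0, size=(4,))
adv_a = apply_uap(obs_a, uap)
adv_b = apply_uap(obs_b, uap)
```

Because the perturbation is input-independent, an adversary only needs a cheap element-wise addition per step at attack time, rather than solving an optimization problem for each observation.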
Jobs in InfoSec / Cybersecurity
Information Technology Specialist II: Network Architect
@ Los Angeles County Employees Retirement Association (LACERA) | Pasadena, CA
Cybersecurity Skills Challenge -- Sponsored by DoD
@ Correlation One | United States
Security Operations Center (SOC) Analyst
@ GK Cybersecurity Group | Remote
Lead Product Security Engineer
@ Baker Hughes | Bangalore, Karnataka, India (Neon Building, West Tower)
Penetration Tester
@ BT Group | Riverside (R6), Hemel Hempstead, United Kingdom
Cloud and Infrastructure Security Engineer II
@ StubHub | Los Angeles, CA