Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions. (arXiv:2306.05873v1 [cs.LG])
June 12, 2023, 1:10 a.m. | Ezgi Korkmaz, Jonah Brown-Cohen
cs.CR updates on arXiv.org arxiv.org
Learning in MDPs with highly complex state representations is now possible
thanks to multiple advances in reinforcement learning algorithm design.
However, this increase in complexity, and with it the growth in the
dimensionality of observations, comes at the cost of volatility that can be
exploited via adversarial attacks (i.e. moving along worst-case directions
in the observation space). To address this policy instability problem we
propose a novel method to detect the presence of these non-robust directions
via local …
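A common way to construct such a worst-case direction (not necessarily the paper's method, which the truncated abstract does not spell out) is an FGSM-style step: take the sign of the gradient of the policy's preferred-action log-probability with respect to the observation. The sketch below illustrates this for a hypothetical linear-softmax policy; all names (`W`, `obs`, `worst_case_direction`) are illustrative assumptions, not from the paper.

```python
import numpy as np

# Hypothetical linear "policy": action logits = W @ obs.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))   # 4 actions, 8-dim observation
obs = rng.standard_normal(8)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def worst_case_direction(W, obs):
    """FGSM-style direction: sign of the gradient of the log-probability
    of the currently preferred action w.r.t. the observation. Stepping
    the observation against this direction reduces that probability
    fastest, to first order."""
    logits = W @ obs
    a = int(np.argmax(logits))          # currently preferred action
    p = softmax(logits)
    # For a linear-softmax policy: d/d_obs log p_a = W[a] - sum_k p_k W[k]
    grad = W[a] - p @ W
    return np.sign(grad)

d = worst_case_direction(W, obs)
eps = 0.1
adv_obs = obs - eps * d                 # small worst-case perturbation
```

Detection methods in this line of work then probe how sharply the policy's action distribution changes along candidate directions like `d`, compared with random directions of the same norm.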
Jobs in InfoSec / Cybersecurity
Social Engineer For Reverse Engineering Exploit Study
@ Independent study | Remote
Cyber Security Culture – Communication and Content Specialist
@ H&M Group | Stockholm, Sweden
Container Hardening, Sr. (Remote | Top Secret)
@ Rackner | San Antonio, TX
GRC and Information Security Analyst
@ Intertek | United States
Information Security Officer
@ Sopra Steria | Bristol, United Kingdom
Casual Area Security Officer South Down Area
@ TSS | County Down, United Kingdom