Nov. 30, 2022, 2:10 a.m. | Gyojin Han, Jaehyun Choi, Hyeong Gwon Hong, Junmo Kim

cs.CR updates on arXiv.org

Generally, regularization-based continual learning models limit access to data
from previous tasks in order to imitate real-world settings, where memory and
privacy constraints apply. However, this restriction introduces a problem:
these models cannot track their performance on each task. In other words,
current continual learning methods are vulnerable to attacks on previous
tasks. We demonstrate the vulnerability of regularization-based continual
learning methods by presenting a simple task-specific, training-time
adversarial attack that can be used in …
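The abstract is truncated and does not specify the attack's exact form, so the
following is only a minimal sketch of what a task-specific, training-time
adversarial perturbation could look like: a standard PGD-style loss-ascent
step crafted against data from a chosen previous task. All names here
(`model`, `x`, `y`, `epsilon`, `alpha`, `steps`) are illustrative assumptions,
not the authors' implementation.

```python
import torch
import torch.nn as nn

def task_specific_pgd(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Sketch: craft an L-infinity perturbation of previous-task inputs x
    that maximizes the model's loss on their labels y (loss ascent)."""
    loss_fn = nn.CrossEntropyLoss()
    # Random start inside the epsilon-ball, clipped to the valid input range.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascend the task loss
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # project back into the ball
            x_adv = x_adv.clamp(0, 1)                         # keep inputs valid
        x_adv = x_adv.detach()
    return x_adv
```

If perturbed samples like these were injected into the training stream for a
new task, a regularization-based learner, which never revisits clean
previous-task data, would have no direct way to notice the degradation on the
targeted task.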

Tags: adversarial attack, training, vulnerability
