June 16, 2022, 1:20 a.m. | Aditya Kuppa, Nhien-An Le-Khac

cs.CR updates on arXiv.org arxiv.org

Deploying robust machine learning models requires accounting for concept drift arising from the dynamically changing, non-stationary nature of data. Addressing drift is particularly imperative in the security domain, where the ever-evolving threat landscape and the lack of sufficiently labeled training data at deployment time lead to performance degradation. Recently proposed concept drift detection methods in the literature tackle this problem by identifying changes in feature/data distributions and periodically retraining the models to learn new concepts. While …
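To illustrate the kind of distribution-based drift detection the abstract refers to, here is a minimal sketch (not the authors' method): it compares each feature's deployment-time distribution against a training-time reference window with a two-sample Kolmogorov-Smirnov test and flags the model for retraining when drift appears. The function name, threshold, and usage below are assumptions for illustration only.

```python
# Minimal concept drift sketch: per-feature two-sample KS test between a
# reference (training-time) window and a current (deployment-time) window.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, current: np.ndarray, alpha: float = 0.01):
    """Return indices of features whose distribution differs significantly
    between the reference and current windows (p-value below alpha)."""
    drifted = []
    for j in range(reference.shape[1]):
        _, p_value = ks_2samp(reference[:, j], current[:, j])
        if p_value < alpha:  # distributions differ significantly for feature j
            drifted.append(j)
    return drifted

# Hypothetical usage: schedule retraining when many features have drifted.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.normal(0.0, 1.0, size=(5000, 10))   # training-time feature matrix
    cur = rng.normal(0.5, 1.2, size=(5000, 10))   # shifted deployment-time data
    drifted = detect_drift(ref, cur)
    if len(drifted) > 3:
        print(f"Drift detected in features {drifted}; retrain the model.")
```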
