Introducing Adaptive Continuous Adversarial Training (ACAT) to Enhance ML Robustness
March 18, 2024, 4:11 a.m. | Mohamed elShehaby, Aditya Kotha, Ashraf Matrawy
cs.CR updates on arXiv.org
Abstract: Machine Learning (ML) is susceptible to adversarial attacks that aim to trick ML models into producing faulty predictions. Adversarial training has been found to increase the robustness of ML models against these attacks. However, in network and cybersecurity, obtaining labeled training and adversarial training data is challenging and costly. Furthermore, concept drift deepens the challenge, particularly in dynamic domains like network and cybersecurity, where the various models in use require periodic retraining. This letter introduces Adaptive …
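The adversarial training the abstract refers to augments each training step with perturbed copies of the inputs, commonly generated with the fast gradient sign method (FGSM). Below is a minimal, hedged sketch of that general idea on a toy logistic-regression classifier; the dataset, hyperparameters, and function names are illustrative assumptions, not the ACAT method from the letter:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """FGSM sketch: step eps along the sign of the input-gradient of the loss.

    For logistic loss L = -log(sigmoid(y * w.x)) with y in {-1, +1},
    dL/dx = -y * sigmoid(-y * w.x) * w.
    """
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    coef = -y * sigmoid(-margin)
    grad = [coef * wi for wi in w]
    sign = lambda g: 1 if g > 0 else (-1 if g < 0 else 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

def train(data, eps=0.3, lr=0.1, epochs=200, adv=True):
    """Train a 2-D logistic regressor; if adv=True, also fit each
    example's FGSM-perturbed copy (plain adversarial training)."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            batch = [x]
            if adv:
                batch.append(fgsm_perturb(x, y, w, eps))
            for xb in batch:
                margin = y * sum(wi * xi for wi, xi in zip(w, xb))
                coef = y * sigmoid(-margin)  # gradient step on log-likelihood
                w = [wi + lr * coef * xi for wi, xi in zip(w, xb)]
    return w

def accuracy(w, data, eps=0.0):
    """Clean accuracy, or accuracy under an FGSM attack when eps > 0."""
    correct = 0
    for x, y in data:
        xe = fgsm_perturb(x, y, w, eps) if eps > 0 else x
        pred = 1 if sum(wi * xi for wi, xi in zip(w, xe)) > 0 else -1
        correct += (pred == y)
    return correct / len(data)

# Toy linearly separable dataset (illustrative only).
data = [([1.0, 1.0], 1), ([1.5, 0.5], 1), ([2.0, 1.5], 1),
        ([-1.0, -1.0], -1), ([-1.5, -0.5], -1), ([-2.0, -1.5], -1)]
w_plain = train(data, adv=False)
w_adv = train(data, adv=True)
```

The letter's contribution, per the abstract, is making this retraining loop adaptive and continuous under concept drift; the sketch above only illustrates the baseline adversarial-training step it builds on.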