Adversarial Feature Alignment: Balancing Robustness and Accuracy in Deep Learning via Adversarial Training
Feb. 20, 2024, 5:11 a.m. | Leo Hyun Park, Jaeuk Kim, Myung Gyo Oh, Jaewoo Park, Taekyoung Kwon
cs.CR updates on arXiv.org
Abstract: Deep learning models continue to advance in accuracy, yet they remain vulnerable to adversarial attacks, which often lead to the misclassification of adversarial examples. Adversarial training mitigates this problem by increasing robustness against these attacks, but it typically reduces a model's standard accuracy on clean, non-adversarial samples. Secure deep learning models must balance robustness and accuracy, yet achieving this balance remains challenging, and the …
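The adversarial training mentioned in the abstract augments training with perturbed inputs crafted to maximize the loss. Below is a minimal, illustrative sketch of one common way to craft such inputs, a one-step FGSM-style perturbation on a toy linear classifier. This is generic background, not the paper's "Adversarial Feature Alignment" method; the model, gradients, and epsilon are all simplified assumptions for the sketch.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_perturb(x, y, W, eps):
    """One-step FGSM sketch: move x by eps along the sign of the
    cross-entropy loss gradient w.r.t. the input.

    Toy linear model (an assumption for illustration): logits = x @ W.
    """
    p = softmax(x @ W)        # predicted class probabilities
    grad_logits = p.copy()
    grad_logits[y] -= 1.0     # d(cross-entropy)/d(logits) = p - onehot(y)
    grad_x = W @ grad_logits  # chain rule back to the input
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # 4 input features, 3 classes
x = rng.normal(size=4)
x_adv = fgsm_perturb(x, y=1, W=W, eps=0.1)
# Each coordinate moves by at most eps (an L-infinity-bounded perturbation).
print(np.max(np.abs(x_adv - x)))
```

In adversarial training, such perturbed samples `x_adv` are fed back into the training loop in place of (or alongside) clean samples; the robustness/accuracy trade-off the abstract describes arises because fitting these worst-case inputs shifts the decision boundary away from what is optimal for clean data.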