Jan. 26, 2024, 2:10 a.m. | Yifan Hao, Tong Zhang

cs.CR updates on arXiv.org

Recent empirical and theoretical studies have established the generalization
capabilities of large machine learning models that are trained to
(approximately or exactly) fit noisy data. In this work, we prove a surprising
result that even if the ground truth itself is robust to adversarial examples,
and the benignly overfitted model is benign in terms of the "standard"
out-of-sample risk objective, this benign overfitting process can be harmful
when out-of-sample data are subject to adversarial manipulation. More
specifically, our main results …
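For context, the two objectives contrasted in the abstract can be written in the usual textbook form (standard definitions, not necessarily the paper's exact setup): the "standard" out-of-sample risk takes the expected loss on clean data, while the adversarial risk lets an adversary perturb each input within an \varepsilon-ball,

\[
R(f) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\bigl[\ell(f(x),y)\bigr],
\qquad
R_{\mathrm{adv}}(f) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\Bigl[\,\max_{\|\delta\|\le\varepsilon}\ell(f(x+\delta),y)\Bigr].
\]

In these terms, the abstract's claim is that an interpolating model can drive R(f) close to that of the ground truth while R_adv(f) stays large, even when the ground truth itself is robust to such perturbations.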
