July 11, 2022, 1:20 a.m. | Daniel Paleka, Amartya Sanyal

cs.CR updates on arXiv.org (arxiv.org)

In supervised learning, it has been shown that, in many settings, label noise in the training data can be interpolated (fit exactly) without any penalty on test accuracy. We show that interpolating label noise induces adversarial vulnerability, and prove the first theorem relating label noise to adversarial risk in terms of the data distribution. Our results are almost sharp when no assumption is made on the inductive bias of the learning algorithm. We also show that inductive bias makes the effect of label noise …
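To make the claimed phenomenon concrete, here is a minimal, hypothetical sketch, not the paper's construction or theorem: a wide MLP is trained to interpolate partially flipped labels on a toy 2D Gaussian mixture, and its accuracy under a small FGSM perturbation is compared with its clean test accuracy. The dataset, noise rate, network width, attack radius, and all function names are illustrative choices, assuming NumPy and PyTorch are available.

```python
# Illustrative sketch (not the paper's setup): interpolating flipped labels
# with an over-parameterized model, then probing adversarial vulnerability.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
rng = np.random.default_rng(0)

def make_data(n, noise_rate):
    # Two well-separated Gaussian blobs; flip a fraction of the labels.
    y = rng.integers(0, 2, size=n)
    X = rng.normal(0.0, 0.5, size=(n, 2)) + np.where(y[:, None] == 1, 2.0, -2.0)
    flip = rng.random(n) < noise_rate
    y_noisy = np.where(flip, 1 - y, y)
    return (torch.tensor(X, dtype=torch.float32),
            torch.tensor(y, dtype=torch.long),
            torch.tensor(y_noisy, dtype=torch.long))

X_tr, y_tr_clean, y_tr = make_data(500, noise_rate=0.1)
X_te, y_te, _ = make_data(1000, noise_rate=0.0)

# A wide MLP has enough capacity to memorize the flipped labels.
model = nn.Sequential(nn.Linear(2, 512), nn.ReLU(), nn.Linear(512, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(2000):
    opt.zero_grad()
    loss_fn(model(X_tr), y_tr).backward()
    opt.step()

def accuracy(logits, y):
    return (logits.argmax(dim=1) == y).float().mean().item()

def fgsm(model, X, y, eps):
    # One-step sign-gradient attack within an l_inf ball of radius eps.
    X_adv = X.clone().requires_grad_(True)
    loss_fn(model(X_adv), y).backward()
    return (X_adv + eps * X_adv.grad.sign()).detach()

# Training accuracy on the noisy labels should approach 1.0 (interpolation),
# while accuracy under a small perturbation is expected to drop well below
# the clean test accuracy near the memorized flipped points.
print("train acc (noisy labels):", accuracy(model(X_tr), y_tr))
print("clean test acc:          ", accuracy(model(X_te), y_te))
X_adv = fgsm(model, X_te, y_te, eps=0.5)
print("FGSM test acc (eps=0.5): ", accuracy(model(X_adv), y_te))
```

The design choice of an over-parameterized network and full-batch training is only meant to guarantee interpolation of the noisy labels; any interpolating learner would serve the same illustrative purpose.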

Tags: adversarial, law, ml, risk
