April 10, 2023, 1:10 a.m. | Maria-Florina Balcan, Steve Hanneke, Rattana Pukdee, Dravyansh Sharma

cs.CR updates on arXiv.org

Machine learning algorithms are often used in environments that are not
captured accurately even by the most carefully obtained training data, either
due to the possibility of 'adversarial' test-time attacks, or on account of
'natural' distribution shift. For test-time attacks, we introduce and analyze a
novel robust reliability guarantee, which requires a learner to output
predictions along with a reliability radius $\eta$, with the meaning that its
prediction is guaranteed to be correct as long as the adversary has not
perturbed the test point farther than a distance $\eta$. …
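The abstract does not spell out the paper's construction, but the semantics of a reliability radius can be illustrated in the simplest case: for a linear classifier under $\ell_2$-bounded perturbations, the distance from a test point to the decision boundary is a valid radius, since no perturbation smaller than that distance can change the sign of the score. The sketch below is a minimal illustration of what the guarantee means in this special case, not the paper's algorithm; the function name `reliable_predict` and all concrete values are hypothetical.

```python
import numpy as np

def reliable_predict(w: np.ndarray, b: float, x: np.ndarray):
    """Return (prediction, eta) for the linear classifier sign(w.x + b).

    Against an l2-bounded adversary, the distance from x to the decision
    boundary, |w.x + b| / ||w||, is a valid reliability radius: for any
    perturbation delta with ||delta||_2 < eta we have
    |w.delta| <= ||w|| * ||delta|| < |w.x + b|, so the score cannot
    change sign and the prediction is unchanged within radius eta.
    """
    score = float(np.dot(w, x) + b)
    prediction = 1 if score >= 0 else -1
    eta = abs(score) / float(np.linalg.norm(w))
    return prediction, eta

# Hypothetical example: the prediction is certified stable against any
# perturbation delta with ||delta||_2 < eta.
w = np.array([3.0, 4.0])   # ||w|| = 5
b = -1.0
x = np.array([2.0, 1.0])   # score = 3*2 + 4*1 - 1 = 9
pred, eta = reliable_predict(w, b, x)
print(pred, eta)           # 1, 9/5 = 1.8
```

Note that this sketch only certifies stability of the prediction within radius $\eta$; the paper's reliability guarantee further requires that the prediction be correct whenever the perturbation stays within that radius.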

