March 5, 2024, 3:12 p.m. | Anastasios N. Angelopoulos, Stephen Bates, Tijana Zrnic, Michael I. Jordan

cs.CR updates on arXiv.org

arXiv:2102.06202v3 Announce Type: replace-cross
Abstract: In real-world settings involving consequential decision-making, the deployment of machine learning systems generally requires both reliable uncertainty quantification and protection of individuals' privacy. We present a framework that treats these two desiderata jointly. Our framework is based on conformal prediction, a methodology that augments predictive models to return prediction sets that provide uncertainty quantification -- they provably cover the true response with a user-specified probability, such as 90%. One might hope that when used with …
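For context, the coverage idea the abstract refers to is easiest to see in the standard, non-private split conformal procedure: fit a model, compute nonconformity scores (here, absolute residuals) on held-out calibration data, and use a calibrated quantile of those scores to form prediction sets. The sketch below is a minimal illustration of that baseline under those assumptions, using a generic scikit-learn regressor; it is not the paper's privacy-preserving framework.

```python
# Minimal sketch of (non-private) split conformal prediction.
# Assumptions: a scikit-learn-style regressor and absolute-residual
# nonconformity scores; this is illustrative, not the paper's method.
import numpy as np
from sklearn.linear_model import LinearRegression

def split_conformal_interval(X_train, y_train, X_calib, y_calib, X_test, alpha=0.1):
    """Return prediction intervals with ~(1 - alpha) marginal coverage."""
    model = LinearRegression().fit(X_train, y_train)

    # Nonconformity scores on the held-out calibration set: absolute residuals.
    scores = np.abs(y_calib - model.predict(X_calib))

    # Conformal quantile: the ceil((n + 1)(1 - alpha)) / n empirical quantile.
    n = len(scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")

    preds = model.predict(X_test)
    return preds - q_hat, preds + q_hat  # lower and upper interval endpoints

# Usage: lo, hi = split_conformal_interval(Xtr, ytr, Xcal, ycal, Xte, alpha=0.1)
# Each interval [lo_i, hi_i] covers the true response with probability >= 90%
# (marginally, over the randomness in the calibration and test data).
```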

