May 31, 2023, 1:10 a.m. | Stephan Rabanser, Anvith Thudi, Abhradeep Thakurta, Krishnamurthy Dvijotham, Nicolas Papernot

cs.CR updates on arXiv.org

Training reliable deep learning models that avoid making overconfident but
incorrect predictions is a longstanding challenge. This challenge is further
exacerbated when learning has to be differentially private: the protection
provided to sensitive data comes at the price of injecting additional
randomness into the learning process. In this work, we conduct a thorough
empirical investigation of selective classifiers -- which can abstain when they
are unsure -- under a differential privacy constraint. We find that several
popular selective prediction approaches are …
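The abstract does not include code, but to make the core idea concrete, here is a minimal sketch of one common selective-prediction baseline, softmax-response thresholding: the classifier answers only when its top softmax probability clears a confidence threshold and abstains otherwise. The function names and the example logits below are hypothetical illustrations, not the paper's implementation; the paper studies how such approaches behave when the underlying model is trained under differential privacy (e.g., with DP-SGD).

```python
import numpy as np

def selective_predict(logits, threshold=0.9):
    """Softmax-response selective prediction: return the predicted class
    when the top softmax probability exceeds `threshold`; otherwise
    abstain by returning None."""
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max()
    probs = np.exp(z) / np.exp(z).sum()
    confidence = float(probs.max())
    if confidence >= threshold:
        return int(probs.argmax()), confidence
    return None, confidence  # abstain: the model is unsure

# Hypothetical usage with logits from any trained classifier.
# Under DP training, injected noise can distort these confidences,
# which is the regime the paper investigates empirically.
logits = np.array([2.1, 0.3, -1.0])
prediction, conf = selective_predict(logits, threshold=0.8)
print(prediction, round(conf, 3))
```

Sweeping the threshold trades coverage (how often the model answers) against selective accuracy (how often answered predictions are correct), which is the standard way such selective classifiers are evaluated.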

