Web: http://arxiv.org/abs/2209.10732

Sept. 23, 2022, 1:24 a.m. | Jiaqi Wang, Roei Schuster, Ilia Shumailov, David Lie, Nicolas Papernot

cs.CR updates on arXiv.org arxiv.org

When learning from sensitive data, care must be taken to ensure that training
algorithms address privacy concerns. The canonical Private Aggregation of
Teacher Ensembles, or PATE, computes output labels by aggregating the
predictions of a (possibly distributed) collection of teacher models via a
voting mechanism. The mechanism adds noise to attain a differential privacy
guarantee with respect to the teachers' training data. In this work, we observe
that this use of noise, which makes PATE predictions stochastic, enables new
forms …
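The noisy voting mechanism the abstract describes can be sketched as follows. This is an illustrative PATE-style noisy-argmax aggregation, not the paper's exact mechanism: the function name, the Laplace noise, and the `2/epsilon` scale are assumptions made for the sketch.

```python
import numpy as np

def pate_aggregate(teacher_preds, num_classes, epsilon, rng=None):
    """Aggregate teacher votes with Laplace noise (PATE-style noisy max).

    teacher_preds: one predicted class label per teacher.
    epsilon: per-query privacy parameter; the 2/epsilon noise scale
             is an illustrative choice, not the paper's calibration.
    """
    rng = rng or np.random.default_rng()
    # Tally the teachers' votes per class, then perturb each count.
    votes = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    noisy = votes + rng.laplace(loc=0.0, scale=2.0 / epsilon, size=num_classes)
    # The returned label is stochastic: re-querying the same input can
    # yield different answers, which is the property the paper studies.
    return int(np.argmax(noisy))

# Example: 250 teachers, 10 classes; most teachers vote for class 3.
rng = np.random.default_rng(0)
preds = np.concatenate([np.full(200, 3), rng.integers(0, 10, size=50)])
label = pate_aggregate(preds, num_classes=10, epsilon=1.0, rng=rng)
```

Because the added noise makes the output label a random variable, repeated queries leak information about the underlying vote histogram; that stochasticity is the attack surface the abstract alludes to.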

Tags: differential privacy, privacy, truth, vote
