July 19, 2022, 1:20 a.m. | Weiyan Shi, Aiqi Cui, Evan Li, Ruoxi Jia, Zhou Yu

cs.CR updates on arXiv.org (arxiv.org)

With the increasing applications of language models, it has become crucial to
protect these models from leaking private information. Previous work has
attempted to tackle this challenge by training RNN-based language models with
differential privacy guarantees. However, applying classical differential
privacy to language models leads to poor model performance as the underlying
privacy notion is overly pessimistic and provides undifferentiated protection for
all tokens in the data. Given that the private information in natural language
is sparse (for example, the bulk …
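The prior work referenced above trains language models with differential privacy guarantees, which in practice usually means DP-SGD: clipping per-example gradients and adding Gaussian noise during training. As a rough point of reference only (this is not the paper's proposed method, and the abstract is truncated before it describes its new selective privacy notion), below is a minimal sketch of such DP training on a toy recurrent language model using PyTorch and Opacus; the model architecture, synthetic data, and hyperparameters are all illustrative assumptions.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine
from opacus.layers import DPLSTM  # DP-compatible drop-in for nn.LSTM


class TinyLM(nn.Module):
    """Toy next-token prediction model standing in for an RNN language model."""

    def __init__(self, vocab_size=100, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = DPLSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)


# Synthetic token sequences, purely for illustration.
vocab_size, seq_len = 100, 16
tokens = torch.randint(0, vocab_size, (512, seq_len + 1))
dataset = TensorDataset(tokens[:, :-1], tokens[:, 1:])
loader = DataLoader(dataset, batch_size=32)

model = TinyLM(vocab_size)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# PrivacyEngine wraps the model, optimizer, and loader so that per-sample
# gradients are clipped (max_grad_norm) and Gaussian noise is added
# (noise_multiplier) -- the classical, undifferentiated DP-SGD protection
# the abstract refers to, applied uniformly to every token.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
)

model.train()
for inputs, targets in loader:
    if inputs.numel() == 0:  # Poisson sampling can yield empty batches
        continue
    optimizer.zero_grad()
    logits = model(inputs)
    loss = criterion(logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()
    optimizer.step()

# Privacy budget spent so far at a fixed delta (smaller epsilon = stronger privacy).
print("epsilon:", privacy_engine.get_epsilon(delta=1e-5))
```

The noise added by DP-SGD is what degrades utility: it is calibrated as if every token were sensitive, which is exactly the over-protection the abstract argues against.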

differential privacy, language modeling, privacy
