July 18, 2022, 1:20 a.m. | Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A. Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, S

cs.CR updates on arXiv.org

We give simpler, sparser, and faster algorithms for differentially private
fine-tuning of large-scale pre-trained language models, which achieve
state-of-the-art privacy-utility tradeoffs on many standard NLP tasks.
We propose a meta-framework for this problem, inspired by the recent success of
highly parameter-efficient methods for fine-tuning. Our experiments show that
differentially private adaptations of these approaches outperform previous
private algorithms in three important dimensions: utility, privacy, and the
computational and memory cost of private training. On many commonly studied …
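The core idea behind such parameter-efficient private fine-tuning can be illustrated with a small sketch: freeze the pre-trained weights, train only a low-rank (LoRA-style) adapter, and run DP-SGD (per-example gradient clipping plus Gaussian noise) on the adapter parameters alone. The sketch below is not the paper's implementation; the model, dimensions, hyperparameters, and the per-example loop are illustrative assumptions.

```python
# Minimal sketch of DP-SGD applied only to a low-rank adapter (assumed setup,
# not the paper's code). A frozen "pre-trained" linear layer stands in for the
# language model; only the adapter factors A and B are trained and privatized.
import torch
import torch.nn as nn

torch.manual_seed(0)

d_in, d_out, rank = 32, 8, 4            # toy dimensions (assumed)
clip_norm, noise_mult, lr = 1.0, 1.0, 0.1

# Frozen pre-trained weight: never updated, so it receives no noise.
W = nn.Linear(d_in, d_out, bias=False)
W.weight.requires_grad_(False)

# Low-rank adapter: the only trainable (and hence privatized) parameters.
A = nn.Parameter(torch.zeros(d_out, rank))      # zero-init so the adapter starts as a no-op
B = nn.Parameter(torch.randn(rank, d_in) * 0.01)
params = [A, B]

def forward(x):
    # Frozen path plus low-rank correction.
    return W(x) + x @ B.t() @ A.t()

loss_fn = nn.MSELoss()

# One DP-SGD step on a toy batch.
x = torch.randn(16, d_in)
y = torch.randn(16, d_out)

# 1) Per-example gradients (a plain loop for clarity; real code vectorizes this).
per_example_grads = []
for i in range(x.shape[0]):
    loss = loss_fn(forward(x[i:i + 1]), y[i:i + 1])
    per_example_grads.append(torch.autograd.grad(loss, params))

# 2) Clip each example's gradient to norm <= clip_norm, then sum over the batch.
summed = [torch.zeros_like(p) for p in params]
for grads in per_example_grads:
    total_norm = torch.sqrt(sum(g.norm() ** 2 for g in grads))
    scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
    for s, g in zip(summed, grads):
        s += g * scale

# 3) Add Gaussian noise calibrated to the clipping norm, average, and take a step.
with torch.no_grad():
    for p, s in zip(params, summed):
        noise = torch.randn_like(s) * noise_mult * clip_norm
        p -= lr * (s + noise) / x.shape[0]

print("adapter A norm after one DP step:", A.norm().item())
```

Because only the small adapter is trained, the clipping and noising touch far fewer parameters than full-model DP-SGD, which is what drives the utility and compute/memory gains the abstract refers to.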
