May 6, 2024, 4:11 a.m. | Fan Wu, Huseyin A. Inan, Arturs Backurs, Varun Chandrasekaran, Janardhan Kulkarni, Robert Sim

cs.CR updates on arXiv.org

arXiv:2310.16960v2 Announce Type: replace-cross
Abstract: Positioned between pre-training and user deployment, aligning large language models (LLMs) through reinforcement learning (RL) has emerged as a prevailing strategy for training instruction-following models such as ChatGPT. In this work, we initiate the study of privacy-preserving alignment of LLMs through Differential Privacy (DP) in conjunction with RL. Following the influential work of Ziegler et al. (2020), we study two dominant paradigms: (i) alignment via RL without a human in the loop (e.g., positive review generation) …
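As context for the abstract above, the sketch below shows one common way differential privacy can be combined with a policy-gradient update: per-example gradient clipping plus calibrated Gaussian noise, in the style of DP-SGD. This is a minimal illustration, not the paper's implementation; the toy policy, reward values, and hyperparameters (clip_norm, noise_multiplier, lr) are assumptions for demonstration only.

```python
# Minimal sketch: DP-SGD-style noisy, clipped gradients applied to a
# REINFORCE-style policy update. All names and values are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

policy = nn.Linear(16, 4)          # toy "policy" over 4 actions
clip_norm = 1.0                    # per-example gradient clip bound
noise_multiplier = 1.0             # Gaussian noise scale relative to clip_norm
lr = 0.1

states = torch.randn(8, 16)        # toy batch of "prompts"
rewards = torch.randn(8)           # toy scalar rewards (e.g., from a reward model)

# Accumulate clipped per-example policy gradients.
summed_grads = [torch.zeros_like(p) for p in policy.parameters()]
for x, r in zip(states, rewards):
    logits = policy(x)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    loss = -dist.log_prob(action) * r          # REINFORCE objective
    grads = torch.autograd.grad(loss, list(policy.parameters()))
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)
    for acc, g in zip(summed_grads, grads):
        acc += g * scale                        # clip each example's contribution

# Add calibrated Gaussian noise, then take an averaged noisy step.
with torch.no_grad():
    for p, acc in zip(policy.parameters(), summed_grads):
        noise = torch.randn_like(acc) * noise_multiplier * clip_norm
        p -= lr * (acc + noise) / len(states)
```

Because each example's influence on the update is bounded by clip_norm and Gaussian noise is added at that scale, the resulting update satisfies a DP guarantee whose (epsilon, delta) depends on the noise multiplier, batch sampling, and number of steps; the privacy accounting itself is omitted here.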

