DPZero: Private Fine-Tuning of Language Models without Backpropagation
Feb. 15, 2024, 5:10 a.m. | Liang Zhang, Bingcong Li, Kiran Koshy Thekumparampil, Sewoong Oh, Niao He
Source: cs.CR updates on arXiv.org (arxiv.org)
Abstract: The widespread practice of fine-tuning large language models (LLMs) on domain-specific data faces two major challenges in memory and privacy. First, as the size of LLMs continues to grow, the memory demands of gradient-based training methods via backpropagation become prohibitively high. Second, given the tendency of LLMs to memorize training data, it is important to protect potentially sensitive information in the fine-tuning data from being regurgitated. Zeroth-order methods, which rely solely on forward passes, substantially …
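To make the zeroth-order idea concrete, here is a minimal sketch of a forward-pass-only update in the spirit the abstract describes: estimate a directional derivative from two function evaluations (SPSA-style), clip it to bound each example's influence, and add Gaussian noise before stepping. This is an illustrative toy, not the DPZero algorithm itself; the function names, hyperparameters, and noise calibration are assumptions for demonstration, and a real DP guarantee would require a proper privacy accountant.

```python
import numpy as np

def zo_dp_step(params, loss_fn, lr=0.05, mu=1e-3, clip=1.0, sigma=0.0, rng=None):
    """One zeroth-order update using only two forward passes (no backprop).

    Sketch only: a SPSA-style directional-derivative estimate, clipped to
    bound sensitivity, with optional Gaussian noise added for privacy.
    """
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal(params.shape)       # shared random direction
    # Two forward passes -- the only model evaluations needed.
    g = (loss_fn(params + mu * z) - loss_fn(params - mu * z)) / (2 * mu)
    g = float(np.clip(g, -clip, clip))          # clip the scalar estimate
    g += sigma * clip * rng.standard_normal()   # Gaussian noise (toy calibration)
    # Step along the same random direction, scaled by the noisy estimate.
    return params - lr * g * z

# Toy usage: minimize a quadratic with forward passes only.
rng = np.random.default_rng(1)
w = np.array([2.0, -3.0])
loss = lambda p: float(np.sum(p ** 2))
for _ in range(500):
    w = zo_dp_step(w, loss, rng=rng)
```

Note that the memory saving comes from never materializing activations or gradients: only the scalar `g` and the random direction `z` are needed, and `z` can even be regenerated from a seed rather than stored.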