Feb. 28, 2024, 5:11 a.m. | Jeffrey G. Wang, Jason Wang, Marvin Li, Seth Neel

cs.CR updates on arXiv.org

arXiv:2402.17012v1 Announce Type: new
Abstract: In this paper, we undertake a systematic study of privacy attacks against open-source Large Language Models (LLMs), where an adversary has access to the model's weights, gradients, or losses and tries to exploit them to learn something about the underlying training data. Our headline results are the first membership inference attacks (MIAs) against pre-trained LLMs that simultaneously achieve high true-positive rates (TPRs) and low false-positive rates (FPRs), and a pipeline showing that over $50\%$ …
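To make the loss-based signal such attacks exploit concrete, here is a minimal sketch of a loss-thresholding MIA against an open-source causal LM. This is an illustration of the general setting the abstract describes, not the paper's actual attack; the model name, threshold value, and calibration procedure are illustrative assumptions.

```python
# Minimal sketch of a loss-based membership inference attack (MIA) against an
# open-source causal LM. Illustrative only: model name and threshold are
# assumptions, and the paper's white-box attacks are more sophisticated.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in for any open-source LM with accessible losses

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sequence_loss(text: str) -> float:
    """Average per-token cross-entropy loss of `text` under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # HF shifts labels internally
    return out.loss.item()

def is_member(text: str, threshold: float = 3.0) -> bool:
    """Guess 'training member' when the loss falls below a threshold.

    A real attack calibrates `threshold` on known non-members to fix a
    target false-positive rate (FPR), then reports the resulting
    true-positive rate (TPR).
    """
    return sequence_loss(text) < threshold
```

The sketch uses only the loss; the white-box setting in the abstract additionally exposes weights and gradients, which richer attacks can incorporate as features to push TPR up at a fixed low FPR.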

