March 5, 2024, 3:11 p.m. | Ashwinee Panda, Christopher A. Choquette-Choo, Zhengming Zhang, Yaoqing Yang, Prateek Mittal

cs.CR updates on arXiv.org arxiv.org

arXiv:2403.00871v1 Announce Type: new
Abstract: When large language models are trained on private data, it can be a significant privacy risk for them to memorize and regurgitate sensitive information. In this work, we propose a new practical data extraction attack that we call "neural phishing". This attack enables an adversary to target and extract sensitive or personally identifiable information (PII), e.g., credit card numbers, from a model trained on user data with upwards of 10% attack success rates, at times, …

adversary arxiv attack call can cs.ai cs.cl cs.cr cs.lg data extraction information language language models large llms phish phishing privacy privacy risk private private data risk sensitive sensitive information stealing teach work

SOC 2 Manager, Audit and Certification

@ Deloitte | US and CA Multiple Locations

Network Security Engineer

@ Meta | Menlo Park, CA | Remote, US

Security Engineer, Investigations - i3

@ Meta | Washington, DC

Threat Investigator- Security Analyst

@ Meta | Menlo Park, CA | Seattle, WA | Washington, DC

Security Operations Engineer II

@ Microsoft | Redmond, Washington, United States

Engineering -- Tech Risk -- Global Cyber Defense & Intelligence -- Bug Bounty -- Associate -- Dallas

@ Goldman Sachs | Dallas, Texas, United States