Feb. 12, 2024, 5:10 a.m. | Hanyin Shao Jie Huang Shen Zheng Kevin Chen-Chuan Chang

cs.CR updates on arXiv.org arxiv.org

The advancement of large language models (LLMs) brings notable improvements across various applications, while simultaneously raising concerns about potential private data exposure. One notable capability of LLMs is their ability to form associations between different pieces of information, but this raises concerns when it comes to personally identifiable information (PII). This paper delves into the association capabilities of language models, aiming to uncover the factors that influence their proficiency in associating information. Our study reveals that as models scale up, …

advancement applications capabilities cs.ai cs.cl cs.cr data data exposure exposure information language language models large llms privacy private private data

Lead Security Specialist

@ Fujifilm | Holly Springs, NC, United States

Security Operations Centre Analyst

@ Deliveroo | Hyderabad, India (Main Office)

CISOC Analyst

@ KCB Group | Kenya

Lead Security Engineer – Red Team/Offensive Security

@ FICO | Work from Home, United States

Cloud Security SME

@ Maveris | Washington, District of Columbia, United States - Remote

SOC Analyst (m/w/d)

@ Bausparkasse Schwäbisch Hall | Schwäbisch Hall, DE