Oct. 21, 2022, 1:24 a.m. | Jie Huang, Hanyin Shao, Kevin Chen-Chuan Chang

cs.CR updates on arXiv.org

Are Large Pre-Trained Language Models Leaking Your Personal Information?

In this paper, we analyze whether Pre-Trained Language Models (PLMs) are prone to leaking personal information. Specifically, we query PLMs for email addresses, using either contexts of the email address or prompts containing the owner's name. We find that PLMs do leak personal information due to memorization. However, since the models are weak at association, the risk of specific personal information being extracted by attackers is low. We hope this work could …
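To make the two probe styles concrete, here is a minimal sketch of that querying setup, assuming the Hugging Face transformers library with GPT-2 standing in for the PLM. The prompt templates and the name "Jane Doe" are hypothetical illustrations, not the authors' exact templates or data.

```python
# Hypothetical sketch: probe a causal PLM for memorized email addresses.
# Assumes `pip install transformers torch`; GPT-2 is a stand-in for any PLM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal language model could be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Two probe styles described in the abstract:
# 1) a context of the email address (a prefix the model may have seen verbatim
#    during pre-training), which tests memorization;
# 2) a prompt containing only the owner's name, which tests association.
prompts = [
    "-----Original Message-----\nFrom: Jane Doe [mailto:",  # context probe (hypothetical template)
    "The email address of Jane Doe is ",                    # name-based association probe
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    # Greedy decoding: check whether the model completes the prompt
    # with a plausible (possibly memorized) email address.
    outputs = model.generate(
        **inputs,
        max_new_tokens=20,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
    completion = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:])
    print(repr(completion))
```

Under the paper's finding, the first style of probe is the riskier one: a memorized prefix can elicit the exact address, while a name-only prompt rarely retrieves the right one because the model associates names and addresses weakly.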
